Good day all! By way of introduction: I learned about Genode because of the recent release's experimental support for seL4. Then I noticed that one of the open challenges is to implement a better architecture for direct-rendered 3D. I've been hacking on X for years and have been wanting to play with 3D on a microkernel, so now I'm trying to figure out what it'd take to tackle the Mesa rework challenge.
As a first step, I had trouble following the "getting started" directions. I've filed/commented on GitHub issues for code-related things. In the web site documentation, you might mention that you're relying on SDL version 1.2, not the current version 2.0 (in Debian, libsdl1.2-dev). And to get the isohybrid tool, I needed the Debian package named syslinux-utils.
http://genode.org/documentation/developer-resources/getting_started
Now that I can run the linux_x86 and okl4_x86 demos, what steps would you recommend for trying to prototype a multi-process 3D infrastructure?
Jamey
Hi Jamey!
Welcome to Genode. I'm also interested in 3D and GPU architecture, though not doing much Genode hacking recently. I suggest that you use the NOVA or Fiasco.OC kernels because they're the primary platforms.
Could you elaborate on what you mean by multi-process 3D?
From the architectural point of view, the Linux GPU stack is itself very modular and designed similarly to a microkernel OS: the clients share nothing; each one allocates buffers for GPU data and command buffers via libdrm, then submits them to the kernel-side driver, which performs additional verification, sets up the MMU, and takes the other steps needed to prepare the GPU for executing the code.
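To make the buffer-allocation step concrete, here is a minimal sketch of a client asking the kernel-side DRM driver for a buffer. I'm using the generic "dumb buffer" ioctl here rather than the i915-specific GEM path, and /dev/dri/card0 is an assumption; build with -I/usr/include/libdrm and link with -ldrm:

  /* minimal DRM buffer allocation via the dumb-buffer ioctl */
  #include <cstdio>
  #include <cstring>
  #include <fcntl.h>
  #include <unistd.h>
  #include <xf86drm.h>  /* drmIoctl(); pulls in drm.h and drm_mode.h */

  int main()
  {
      int fd = open("/dev/dri/card0", O_RDWR);
      if (fd < 0) { perror("open"); return 1; }

      struct drm_mode_create_dumb req;
      memset(&req, 0, sizeof(req));
      req.width = 640; req.height = 480; req.bpp = 32;

      /* the kernel-side driver validates the request and returns a GEM handle */
      if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &req) < 0) {
          perror("DRM_IOCTL_MODE_CREATE_DUMB");
          return 1;
      }
      printf("GEM handle %u, pitch %u, size %llu\n",
             req.handle, req.pitch, (unsigned long long)req.size);
      close(fd);
      return 0;
  }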
There are some use cases which require the transfer of a GPU memory buffer from one client to another; the most notable examples involve the hardware encoders and decoders. If you are interested in that, take a look at libva (VAAPI) and weston-recorder; there is an example of how to use the Intel GPU to H.264-encode what was rendered by OpenGL. (read the sources: https://github.com/hardening/weston/blob/master/src/vaapi-recorder.c#L1024
and also my blog post: http://allsoftwaresucks.blogspot.ru/2014/10/abusing-mesa-by-hooking-elfs-and...)
So I think an interesting use-case would be replicating or porting weston - creating a GPU-backed compositing window manager.
In the Linux world, memory sharing in the DRM subsystem is done via Unix domain sockets. Each memory buffer allocated through libdrm or libGBM can be exported to a file descriptor via an ioctl (the PRIME interface; the older name-based mechanism was called "flink"). Since it's an fd, you can then pass it to another process via a domain socket.
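The sketch below shows the two halves of that: drmPrimeHandleToFD() turns a GEM handle into a shareable fd, and the standard SCM_RIGHTS idiom hands it to a peer over an already-connected Unix domain socket (error handling trimmed):

  #include <cstring>
  #include <sys/socket.h>
  #include <sys/uio.h>
  #include <xf86drm.h>  /* drmPrimeHandleToFD() */

  int share_buffer(int drm_fd, unsigned gem_handle, int unix_sock)
  {
      int prime_fd = -1;
      if (drmPrimeHandleToFD(drm_fd, gem_handle, DRM_CLOEXEC, &prime_fd) < 0)
          return -1;

      char dummy = 'B';                     /* must send at least one byte */
      struct iovec iov = { &dummy, 1 };
      char cbuf[CMSG_SPACE(sizeof(int))];

      struct msghdr msg;
      memset(&msg, 0, sizeof(msg));
      msg.msg_iov        = &iov;
      msg.msg_iovlen     = 1;
      msg.msg_control    = cbuf;
      msg.msg_controllen = sizeof(cbuf);

      struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
      cmsg->cmsg_level = SOL_SOCKET;
      cmsg->cmsg_type  = SCM_RIGHTS;        /* kernel dups the fd for the peer */
      cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
      memcpy(CMSG_DATA(cmsg), &prime_fd, sizeof(int));

      return (int)sendmsg(unix_sock, &msg, 0);
  }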
If you are going to design a "secure" system for sharing the GPU resources on top of Genode, I suggest considering the following things:

* There should probably be an intermediate resource management server between the kernel/libdrm container and the app.

* You should think about whether you want to allow multiple clients to access the same buffer simultaneously or make the access exclusive.

* In the latter case, you need to figure out how to guarantee exclusivity. Since buffers are basically chunks of memory, you will probably have to write a custom pager (memory manager) that handles page faults when a client is prohibited from accessing memory and returns the error to the client somehow.

* An interesting problem is to prove exclusive access to the resources when they are not mapped into the client's address space but are already uploaded to the GPU and therefore controlled by some handle (basically an unsigned integer indexing some array in the GPU memory).
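To illustrate what the interface of such an intermediate server might look like on Genode, here is a purely hypothetical session sketch (none of these names exist in Genode today; it just shows buffers handed out as dataspace capabilities and command submission by handle):

  /* hypothetical GPU session interface -- illustrative only */
  #include <session/session.h>
  #include <base/rpc.h>
  #include <dataspace/capability.h>

  namespace Gpu { struct Session; }

  struct Gpu::Session : Genode::Session
  {
      static const char *service_name() { return "Gpu"; }

      typedef unsigned long Buffer_id;

      /* allocate a buffer object, accounted against the client's quota */
      virtual Buffer_id alloc_buffer(Genode::size_t size) = 0;

      /* expose the buffer to the client as a dataspace */
      virtual Genode::Dataspace_capability dataspace(Buffer_id) = 0;

      /* submit a command buffer that references earlier allocations */
      virtual void exec(Buffer_id cmd_buf, Genode::size_t len) = 0;

      virtual void free_buffer(Buffer_id) = 0;

      GENODE_RPC(Rpc_alloc_buffer, Buffer_id, alloc_buffer, Genode::size_t);
      GENODE_RPC(Rpc_dataspace, Genode::Dataspace_capability, dataspace,
                 Buffer_id);
      GENODE_RPC(Rpc_exec, void, exec, Buffer_id, Genode::size_t);
      GENODE_RPC(Rpc_free_buffer, void, free_buffer, Buffer_id);
      GENODE_RPC_INTERFACE(Rpc_alloc_buffer, Rpc_dataspace, Rpc_exec,
                           Rpc_free_buffer);
  };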
On Sun, Aug 02, 2015 at 07:33:07PM +0300, Alexander Tarasikov wrote:
Hi Jamey!
Welcome to Genode. I'm also interested in 3D and GPU architecture, though not doing much Genode hacking recently.
Thank you for the warm welcome! I'd have replied sooner if this week didn't get so busy.
I suggest that you use the NOVA or Fiasco.OC kernels because they're the primary platforms.
That was one thing I wondered, thanks!
Could you elaborate on what you mean by multi-process 3D?
I was referring to the Genode "challenges" list, which mentions that "Genode 10.08 introduced Gallium3D including the GPU driver for Intel GMA CPUs." (I'm guessing this has bit-rotted somewhat since then? I haven't found where that code might live yet.)
It goes on to say that "the current approach executes the GPU driver alongside the complete Gallium3D software stack and the application code in one address space," which of course is undesirable for security, but also because it limits users to a single 3D client at a time.
I think what I want to do is:
- define an analogue of the Linux DRM API using Genode IPC,
- port the Linux kernel generic DRM layer and the driver for Intel integrated graphics to this IPC interface (as part of dde_linux, I guess?),
- and port libdrm to the IPC interface.
I'm hoping that the libdrm abstraction layer is comprehensive enough that Mesa would not need much, if any, patching.
For testing, I imagine primarily using some EGL/libgbm/modesetting render-only demo, because I don't want to have to think about input APIs at the same time.
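Roughly, the test client I have in mind is the usual Linux GBM/EGL skeleton below (error handling trimmed; the device path and the pointer casts are the usual Mesa-on-GBM conventions, and it only goes as far as creating a context):

  #include <fcntl.h>
  #include <gbm.h>
  #include <EGL/egl.h>

  int main()
  {
      int fd = open("/dev/dri/card0", O_RDWR);
      struct gbm_device  *gbm  = gbm_create_device(fd);
      struct gbm_surface *surf = gbm_surface_create(gbm, 640, 480,
                                                    GBM_FORMAT_XRGB8888,
                                                    GBM_BO_USE_RENDERING);

      EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);
      eglInitialize(dpy, 0, 0);

      static const EGLint cfg_attr[] = {
          EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
          EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE };
      EGLConfig cfg; EGLint n;
      eglChooseConfig(dpy, cfg_attr, &cfg, 1, &n);

      eglBindAPI(EGL_OPENGL_API);
      EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, 0);
      EGLSurface win = eglCreateWindowSurface(dpy, cfg,
                                              (EGLNativeWindowType)surf, 0);
      eglMakeCurrent(dpy, win, win, ctx);

      /* ... draw with GL here, then eglSwapBuffers(dpy, win) ... */
      return 0;
  }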
As you pointed out, I'd really like to wind up with a Wayland interface replacing Genode's Nitpicker. (Which is another wishlist item on the "challenges" page, I noticed.)
- There should probably be an intermediate resource management server
between the kernel/libdrm container and the app.
Agreed! In a complete implementation, something should keep track of how much video memory is available and share it fairly between clients. Bonus points if it also can provide a generic implementation of command scheduling, to keep any one client from starving other clients' access to the GPU.
That said, I'm hoping to get a single-client demo working without any resource management first. :-)
- You should think of whether you want to allow multiple clients to
access the same buffer simultaneously or make the access exclusive.
I think, to support the Wayland model, multiple clients need to be allowed to access the same buffer. But they shouldn't usually be trying to map the raw buffer contents into their local address space, right? That is a recipe for a performance disaster, especially on graphics cards with dedicated VRAM.
Jamey
Hello Jamey,
welcome to the list! Great that you are interested in picking up the GPU-related line of work.
I'd like to chime in because I conceived the original i915 GPU work 5 years ago.
On 10.08.2015 00:15, Jamey Sharp wrote:
I was referring to the Genode "challenges" list, which mentions that "Genode 10.08 introduced Gallium3D including the GPU driver for Intel GMA CPUs." (I'm guessing this has bit-rotted somewhat since then? I haven't found where that code might live yet.)
The state of my original port is roughly explained in the release notes of Genode 10.08:
http://genode.org/documentation/release-notes/10.08#Gallium3D_and_Intel%27s_...
We maintained this state until spring this year, when we decided to abandon it until somebody becomes interested again. Now, just shortly after, you are showing up. ;-)
The code is still there but not regularly tested or maintained. The important pieces are:
* The port of the i915 GPU driver / the GEM subsystem of the Linux kernel. I ported the code via our DDE approach. But unlike all recent DDE-Linux-based drivers, the code resides in a separate repository:
https://github.com/genodelabs/linux_drivers/tree/master/src/drivers/gpu
We planned to add the revived version of this code to our new 'repos/dde_linux' repository within the Genode tree but haven't done so yet.
* The port of libdrm:
https://github.com/genodelabs/genode/blob/master/repos/libports/ports/libdrm...
https://github.com/genodelabs/genode/blob/master/repos/libports/lib/mk/libdr...
https://github.com/genodelabs/genode/tree/master/repos/libports/src/lib/libd...
As you can see in ioctl.cc, the code implements ioctl by simply calling the corresponding function of the GPU driver. Normally, we'd need to redirect those calls via RPC. But in my setup, I just co-located the GPU driver + libdrm + gallium3d + application within a single component. (A conceptual sketch of such a shim follows after this list.)
* Mesa / Gallium3d, which is part of 'repos/libports/'.
* A custom EGL driver to interface Mesa with Genode:
https://github.com/genodelabs/genode/tree/master/repos/libports/src/lib/egl
* An example application and a corresponding run script:
https://github.com/genodelabs/genode/tree/master/repos/libports/src/app/eglg...
https://github.com/genodelabs/genode/blob/master/repos/libports/run/eglgears...
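Conceptually, the co-located ioctl shim mentioned above boils down to the following self-contained sketch (the request codes and driver entry points here are illustrative stubs, not the real DRM definitions):

  #include <cstdio>

  enum {                              /* stand-ins for real DRM request codes */
      IOCTL_GEM_CREATE     = 1,
      IOCTL_GEM_EXECBUFFER = 2,
  };

  /* stubs standing in for the ported i915 driver's entry points */
  static int driver_gem_create(void *)     { return 0; }
  static int driver_gem_execbuffer(void *) { return 0; }

  /* no RPC, no kernel boundary: ioctl is a plain function call into the
   * driver that lives in the same address space */
  extern "C" int ioctl(int /* fd */, unsigned long request, void *arg)
  {
      switch (request) {
      case IOCTL_GEM_CREATE:     return driver_gem_create(arg);
      case IOCTL_GEM_EXECBUFFER: return driver_gem_execbuffer(arg);
      default:
          std::fprintf(stderr, "unhandled ioctl %lu\n", request);
          return -1;
      }
  }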
It goes on to say that "the current approach executes the GPU driver alongside the complete Gallium3D software stack and the application code in one address space," which of course is undesirable for security, but also because it limits users to a single 3D client at a time.
I think what I want to do is:
- define an analogue of the Linux DRM API using Genode IPC,
- port the Linux kernel generic DRM layer and the driver for Intel integrated graphics to this IPC interface (as part of dde_linux I guess?),
- and port libdrm to the IPC interface.
I hope that the pointers above will serve you well as a suitable starting point.
I'm hoping that the libdrm abstraction layer is comprehensive enough that Mesa would not need much, if any, patching.
That is consistent with my experience. As far as I remember, I have not modified Mesa at all.
As you pointed out, I'd really like to wind up with a Wayland interface replacing Genode's Nitpicker. (Which is another wishlist item on the "challenges" page, I noticed.)
I do not think that the replacement of Nitpicker by something else is strictly necessary, as Nitpicker and Wayland share the same principal architecture.
- There should probably be an intermediate resource management server
between the kernel/libdrm container and the app.
Agreed! In a complete implementation, something should keep track of how much video memory is available and share it fairly between clients. Bonus points if it also can provide a generic implementation of command scheduling, to keep any one client from starving other clients' access to the GPU.
I would refer to this component simply as the "GPU driver". It would contain both the actual driver code that talks to the GPU and the code for multiplexing the GPU. I think that, given the reuse of the Linux kernel code, it would be quite difficult to separate those two concerns into two distinct components.
That said, I'm hoping to get a single-client demo working without any resource management first. :-)
- You should think of whether you want to allow multiple clients to
access the same buffer simultaneously or make the access exclusive.
I think, to support the Wayland model, multiple clients need to be allowed to access the same buffer. But they shouldn't usually be trying to map the raw buffer contents into their local address space, right? That is a recipe for a performance disaster, especially on graphics cards with dedicated VRAM.
Buffer objects are mapped directly into the application's address space. This is also the case on Linux, where a custom page-fault handler manages the part of the address space where the /dev/drm device node is mapped via mmap. The code (and the overloading of the mmap arguments with different semantics by the i915 driver) is quite frightening. But in principle, the construct could work very similarly on Genode, where we have a proper interface for managing (parts of) virtual address spaces from a remote component. On Genode, each buffer object would be represented as a dataspace. But let us keep this topic for later.
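On the client side, mapping such a buffer object would then be the usual dataspace idiom. Sketched here against the old-style Genode API we use today, with the session call that hands out the capability left abstract:

  #include <base/env.h>
  #include <dataspace/capability.h>

  void *map_buffer(Genode::Dataspace_capability ds_cap)
  {
      /* the RM session picks a free virtual address and attaches the
       * dataspace there, comparable to mmap on Linux */
      return Genode::env()->rm_session()->attach(ds_cap);
  }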
Have fun with your exploration! For giving the existing code a try, I would recommend testing a slightly older Genode version (like 14.11), where the i915 GPU driver was still known to work.
Cheers Norman
Thanks to your pointers, Norman, I've gotten as far as building Genode 14.11 for nova_x86_32 with the eglgears run script, with i915 added to SPECS.
I've booted the resulting .iso on a couple of Thinkpads from various eras, as well as in qemu of course. I don't actually get any gears rendering on any of them.
I don't expect it to work right under qemu since I don't think Intel integrated graphics is emulated there, but it's the only way I know to get debugging output so far. Under qemu, I see the following output:
[init -> launchpad -> init -> eglgears] native_probe* native_create_probe(EGLNativeDisplayType): not yet implemented dpy=0
[init -> launchpad -> init -> eglgears] native_probe_result native_get_probe_result(native_probe*): not yet implemented
[init -> launchpad -> init -> eglgears] falling back to softpipe driver
[init -> launchpad -> init -> eglgears] returned from init display->screen
[init -> launchpad -> init -> eglgears] no plugin found for fcntl(2)
[init -> launchpad -> init -> eglgears] no plugin found for write(2)
[init -> launchpad -> init -> eglgears] called, return 1 connector
no RM attachment (READ pf_addr=9 pf_ip=481c4 from a2bfefc6 eglgears)
virtual void Genode::Signal_session_component::submit(Genode::Signal_context_capability, unsigned int): invalid signal-context capability
static void Genode::Pager_object::_page_fault_handler(): unhandled page fault, 'pager:eglgears' address=0x9 ip=0x481c4
So under qemu I guess eglgears crashes by dereferencing a bogus pointer. How can I get this console output on real hardware, to see if it's crashing the same way?
Also, Norman, do you remember exactly which hardware you tested this code on in 2010? I grabbed an old Thinkpad to try to match your setup more closely, so my test box has Intel 945GM graphics (PCI ID 8086:27a2) and a 32-bit Core Duo CPU. I may have gone a little too far back as it's a 2006 model, perhaps?
Jamey
I can't give a definite answer on the Thinkpad compatibility, but it sounds like the i915 driver should be compatible with the Intel 945GM, and the CPU shouldn't be the problem. I might be wrong, but I don't think your problem lies there.

As for the console output, there are instructions in the 13.05 release notes: http://genode.org/documentation/release-notes/13.05#Output_and_reset_with_In...

Also, if you are able to get a serial port connection, be sure to use a null-modem (crossover) cable. Other serial cables will not work.
Hi Jamey,
On 03.09.2015 05:31, Jamey Sharp wrote:
Thanks to your pointers, Norman, I've gotten as far as building Genode 14.11 for nova_x86_32 with the eglgears run script, with i915 added to SPECS.
I've booted the resulting .iso on a couple of Thinkpads from various eras, as well as in qemu of course. I don't actually get any gears rendering on any of them.
I don't expect it to work right under qemu since I don't think Intel integrated graphics is emulated there, but it's the only way I know to get debugging output so far. Under qemu, I see the following output:
[init -> launchpad -> init -> eglgears] native_probe* native_create_probe(EGLNativeDisplayType): not yet implemented dpy=0
[init -> launchpad -> init -> eglgears] native_probe_result native_get_probe_result(native_probe*): not yet implemented
[init -> launchpad -> init -> eglgears] falling back to softpipe driver
[init -> launchpad -> init -> eglgears] returned from init display->screen
[init -> launchpad -> init -> eglgears] no plugin found for fcntl(2)
[init -> launchpad -> init -> eglgears] no plugin found for write(2)
[init -> launchpad -> init -> eglgears] called, return 1 connector
no RM attachment (READ pf_addr=9 pf_ip=481c4 from a2bfefc6 eglgears)
virtual void Genode::Signal_session_component::submit(Genode::Signal_context_capability, unsigned int): invalid signal-context capability
static void Genode::Pager_object::_page_fault_handler(): unhandled page fault, 'pager:eglgears' address=0x9 ip=0x481c4
So under qemu I guess eglgears crashes by dereferencing a bogus pointer. How can I get this console output on real hardware, to see if it's crashing the same way?
thank you for the log. I can see two things: First, eglgears fails to probe for the Intel GMA PCI device, which is expected as you are executing the run script on Qemu. It falls back to the softpipe driver. The second issue indeed looks like a de-referenced NULL pointer. I can reproduce it with the current master branch of Genode. It even occurs on Linux, where I was able to obtain a backtrace. I just created the following issue for it:
https://github.com/genodelabs/genode/issues/1670
While investigating, I tried various prior versions of Genode, including versions 14.11 and 14.05. Interestingly, all the tested versions fail in the same way. However, once I installed the old tool chain (the one built by the tool/tool_chain script for the respective version), things started to look a bit brighter. E.g., with Genode 14.05 and the tool chain of this version, the eglgears.run script works on Qemu (on NOVA, I had to increase the memory assignment in the launchpad a bit). I haven't tested it on real hardware though.
So the issue seems somehow to be related to the tool chain. Unfortunately, the backtrace does not point to an obvious problem. We'll need to investigate.
In the meanwhile, could you try to give Genode 14.05 (with the corresponding tool chain) a spin?
Regarding your question about obtaining the log output on real hardware, you have two options:
* Use a machine with Intel Active Management Technology (AMT), which allows you to transfer the serial output over the network.
* Install a serial ExpressCard (if your laptop has such a slot).
Our run tool supports both options.
Also, Norman, do you remember exactly which hardware you tested this code on in 2010? I grabbed an old Thinkpad to try to match your setup more closely, so my test box has Intel 945GM graphics (PCI ID 8086:27a2) and a 32-bit Core Duo CPU. I may have gone a little too far back as it's a 2006 model, perhaps?
The machine looks fine. I successfully tested the eglgears.run script on a 945GM device.
Cheers Norman
Hooray, eglgears runs on this old Thinkpad when I build it from Genode 14.05 and the 12.11 toolchain! Until I get a serial console I guess I can't tell for sure whether it's using softpipe or hardware acceleration, though.
Based on the stack trace you got, I debugged the problem with eglgears on the newer toolchain (it attempted to pass statically allocated data to free()) and included a trivial patch in the GitHub issue you opened.
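For the curious, the bug class is the classic one sketched here (illustrative, not the actual Mesa code):

  #include <cstdlib>

  static int default_config[4] = { 0, 1, 2, 3 };   /* static storage */

  void cleanup(int *cfg)
  {
      /* BUG: free(cfg) is undefined behaviour when cfg points at
       * default_config, because that memory never came from malloc() */
      if (cfg != default_config)
          free(cfg);                /* only free what was heap-allocated */
  }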
Applying that patch makes the eglgears demo work under qemu on Genode 14.11, but the window is still black on my test machine. I guess I need a serial console to troubleshoot that, too.
My current test laptop has a CardBus slot but not ExpressCard, as far as I can tell; does that matter? There's nothing in its BIOS about turning on AMT so I'm guessing it doesn't have that. I'm trying to scrounge up other laptops now with either of those options and sufficiently old Intel graphics.
Jamey
Hi Jamey,
On 03.09.2015 23:22, Jamey Sharp wrote:
Hooray, eglgears runs on this old Thinkpad when I build it from Genode 14.05 and the 12.11 toolchain! Until I get a serial console I guess I can't tell for sure whether it's using softpipe or hardware acceleration, though.
Based on the stack trace you got, I debugged the problem with eglgears on the newer toolchain (it attempted to pass statically allocated data to free()) and included a trivial patch in the GitHub issue you opened.
congratulations, and thanks for the fix! :-)
My current test laptop has a CardBus slot but not ExpressCard, as far as I can tell; does that matter?
Yes, as long as the serial device appears as a PCI device, you are fine. You can find this topic discussed in Section "7.7.3. Log output on modern PC hardware" in our documentation [1].
[1] http://genode.org/documentation/genode-foundations-15-05.pdf
There's nothing in its BIOS about turning on AMT so I'm guessing it doesn't have that. I'm trying to scrounge up other laptops now with either of those options and sufficiently old Intel graphics.
Btw, in the meantime, I have started to work on porting the Intel KMS driver from Linux 3.14.5 to Genode. The code in Linux' drm/i915/ changed substantially since my original GPU experiments. So I am building a fresh driver environment for the new version rather than attempting to update the five-year-old driver. However, right now, I am focusing on the video-mode-setting side of things, not the GPU.
For this work, I am using a refurbished Thinkpad x201 as test machine. It is fairly cheap, has Intel graphics, and is equipped with both Intel AMT and an Express-card slot.
Cheers Norman
Hi Norman!
On Fri, Sep 4, 2015 at 2:09 AM, Norman Feske <norman.feske@...1...> wrote:
On 03.09.2015 23:22, Jamey Sharp wrote:
My current test laptop has a CardBus slot but not ExpressCard, as far as I can tell; does that matter?
Yes, as long as the serial device appears as a PCI device, you are fine. You can find this topic discussed in Section "7.7.3. Log output on modern PC hardware" in our documentation [1].
[1] http://genode.org/documentation/genode-foundations-15-05.pdf
I've switched to a ThinkPad X220 and enabled AMT on it, but I haven't been able to get amtterm to work. amttool successfully resets the machine, and I can use the AMT web interface, but amtterm consistently reports:
amtterm: NONE -> CONNECT (connection to host)
ipv4 10.0.0.1 [10.0.0.1] 16994
connect: Connection timed out
amtterm: CONNECT -> ERROR (failure)
How can I troubleshoot this?
Btw, in the meantime, I have started to work on porting the Intel KMS driver from Linux 3.14.5 to Genode. The code in Linux' drm/i915/ changed substantially since my original GPU experiments. So I am building a fresh driver environment for the new version rather than attempting to update the five-year-old driver. However, right now, I am focusing on the video-mode-setting side of things, not the GPU.
Cool! What's your timeline for that work? I'm going to be at the X.Org Developers' Conference, September 16th-18th, and would like to give an informal presentation on the state of graphics drivers in Genode. If you have stuff I can show off for you by then, I'd be delighted to.
Jamey
Hi Jamey,
Did you enable SOL/IDER in the AMT configuration settings? The config menu isn't part of the BIOS but can be accessed via CTRL-P on boot or after pressing the blue Think* key. I'm not sure about the exact procedure for the X220, but it works very similarly on all Thinkpad X/T notebook models.
Regards Christian
Hi Christian!
On Sep 4, 2015 12:36 PM, "Christian Helmuth" <christian.helmuth@...1...> wrote:
Did you enable SOL/IDER in the AMT configuration settings?
I've checked a few times now; SOL is enabled and IDER is enabled. So I'm baffled.
The config menu isn't part of the BIOS but can be accessed via CTRL-P on boot or after pressing the blue Think* key. I'm not sure about the exact procedure for x220 but it works very similar on all Thinkpad X/T notebook models.
Yeah, that surprised me. At least on this laptop, I have to hit the blue "ThinkVantage" button before it'll let me hit Ctrl-P to reach the AMT options. The Ctrl-P shortcut doesn't seem to work at any other time, contrary to what various people on the Internet have written.
Jamey
Thinking about it again, I had a similar issue with an x205 recently. The solution w
Jamey,
Sorry for the incomplete mail - I can't imagine why K9-Mail decided to send it in the middle of typing...
Thinking about it again, I had a similar issue with a Thinkpad X250 recently. The solution was to enable the IDER listener via the wsman tool. My command line was
wsman put http://intel.com/wbem/wscim/1/amt-schema/1/AMT_RedirectionService \ -h <host> -P 16992 -k ListenerEnabled=true
Maybe this also works for you.
Greets Christian
Hey Christian, your suggestion reminded me that there was an option in the AMT configuration called "Legacy Redirection Mode". Turning that on made amtterm work--hooray!
Which means now I can see that on this X220, eglgears is falling back to softpipe--boo. :-( In hindsight, I suppose Sandybridge graphics is too new for the driver from 2010.
Now back to scrounging for more laptops to test on...

Jamey
Alright, I've found a Dell Latitude D620, which has a physical serial port and Intel 945 integrated graphics. The eglgears demo runs when built from Genode 14.05, and I can see from the console output that it's hardware accelerated. So that's great!
My guess is that the fastest way I can prototype separating the driver into its own address space is to modify the old linux_drivers code, rather than trying to port the modern Intel driver from a current Linux kernel release. But I'd prefer to work against Genode master, if that isn't too much work to throw at a prototype.
What would it take to forward-port the old linux_drivers code from 2010 into dde_linux on current Genode master? (Or should I just build a throwaway demo against 14.05 to prove the concept, and help with porting the modern Intel driver later?)
As one experiment in that direction, I tried running eglgears built from Genode 14.11, which I guess was the last release that had dde_kit. Even with the patch to not free a static array, it opens a blank window. The console output stops after these messages:
[init -> launchpad -> init -> eglgears] native_probe* native_create_probe(EGLNativeDisplayType): not yet implemented dpy=0
[init -> launchpad -> init -> eglgears] native_probe_result native_get_probe_result(native_probe*): not yet implemented
[init -> launchpad -> init -> nit_fb] using xywh=(300,300,576,408)
On the 14.05 release, where the demo works, the output continues past that. It starts with:
[init -> launchpad -> init -> nit_fb] using xywh=(300,100,576,408) refresh_rate=0
[init -> launchpad -> init -> eglgears] native_probe* native_create_probe(EGLNativeDisplayType): not yet implemented dpy=0
[init -> launchpad -> init -> eglgears] native_probe_result native_get_probe_result(native_probe*): not yet implemented
[init -> launchpad -> init -> eglgears] I915_gpu_driver::I915_gpu_driver(): module_agp_intel_init returned 0, driver at 14e27a0
[init -> launchpad -> init -> eglgears] dev_info: Intel 945GM Chipset
[init -> launchpad -> init -> eglgears] dev_info: detected 7932K stolen memory
[init -> launchpad -> init -> eglgears] dev_info: AGP aperture is 256M @ 0xd0000000
[init -> launchpad -> init -> eglgears] I915_gpu_driver::I915_gpu_driver(): call drm_agp_init
Plus another 20 lines or so. Any idea why it's failing?

Jamey
Hi Jamey,
On 05.09.2015 03:01, Jamey Sharp wrote:
Alright, I've found a Dell Latitude D620, which has a physical serial port and Intel 945 integrated graphics. The eglgears demo runs when built from Genode 14.05, and I can see from the console output that it's hardware accelerated. So that's great!
great that you got it working! I admire your perseverance. :-)
My guess is that the fastest way I can prototype separating the driver into its own address space is to modify the old linux_drivers code, rather than trying to port the modern Intel driver from a current Linux kernel release. But I'd prefer to work against Genode master, if that isn't too much work to throw at a prototype.
What would it take to forward-port the old linux_drivers code from 2010 into dde_linux on current Genode master? (Or should I just build a throwaway demo against 14.05 to prove the concept, and help with porting the modern Intel driver later?)
Forward-porting the old driver involves the following aspects:
* Replacing the code that relies on the no-longer-available DDE Kit (in lx_emul.cc), or reverting the commit that removed DDE Kit.
* Adjusting the driver to use the new platform driver instead of the old PCI driver. E.g., after the transition to the new platform driver, we now allocate memory for DMA buffers at the platform driver instead of talking to core's RAM service (see the sketch after this list). To see how this works, I recommend looking into the drivers in the dde_linux repository.
* Changing repos/libports/src/lib/egl/select_driver.cc to return the name of the driver library ("gallium-i915.lib.so"). The original probing code used to scan the PCI bus for the supported Intel GMA device IDs, but we removed that code some months ago. For your work, I would drop the probing and just hard-wire the driver name.
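For the DMA-buffer point above, the shape of the new code is roughly the following sketch (written from memory against today's platform session; treat the exact names as approximations):

  #include <base/env.h>
  #include <platform_session/connection.h>

  void *alloc_dma(Genode::size_t size)
  {
      static Platform::Connection platform;

      /* the platform driver hands back RAM suitable for device DMA */
      Genode::Ram_dataspace_capability ds = platform.alloc_dma_buffer(size);

      /* make the buffer visible in the local address space, too */
      return Genode::env()->rm_session()->attach(ds);
  }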
Regarding your question on how to proceed, it depends on your priorities. If you want to work on the actual problem of splitting the driver from the application right away, I'd recommend basing your work on 14.05. In the meantime, someone on our team could look into forward-porting the old driver, so you wouldn't need to waste your time following the recent history of our platform driver. On the other hand, if you are eager to learn more about Genode architecture-wise, getting your hands dirty with the forward-porting work might provide you with deeper insights into Genode.
As one experiment in that direction, I tried running eglgears built from Genode 14.11, which I guess was the last release that had dde_kit. Even with the patch to not free a static array, it opens a blank window. The console output stops after these messages:
At that time, we transitioned to our new dynamic linker. I remember that we had a few remaining dynamic-linking issues even after 14.11, which may have affected the eglgears scenario. With the current master branch and your fix (in issue 1670), the eglgears.run script works well using the softpipe driver. I wouldn't bother with 14.11 at this point.
Cheers Norman
Hi Jamey,
Btw, in the meantime, I have started to work on porting the Intel KMS driver from Linux 3.14.5 to Genode. The code in Linux' drm/i915/ changed substantially since my original GPU experiments. So I am building a fresh driver environment for the new version rather than attempting to update the five-year-old driver. However, right now, I am focusing on the video-mode-setting side of things, not the GPU.
Cool! What's your timeline for that work? I'm going to be at the X.Org Developers' Conference, September 16th-18th, and would like to give an informal presentation on the state of graphics drivers in Genode. If you have stuff I can show off for you by then, I'd be delighted to.
thanks for spreading the word! :-)
http://phoronix.com/scan.php?page=news_item&px=GPU-Microkernel-Support
BTW, you can find my current line of i915-related work on the following branch:
https://github.com/nfeske/genode/commits/intel_kms
Even though I got most of the driver to compile and the basic initializations (like detecting the device revision, obtaining I/O resources etc.) are done, there is still significant work to do. Right now, the driver manages to switch off the panel of my test machine and tries to squeeze some EDID information out of the panel, yay!
At present, Stefan has taken over the torch while I am busy with other work. If you are curious, you may give the current state a try (via the dde_linux/run/intel_fb.run script), or even lend a helping hand. But be cautioned: It is very rough around the edges.
Cheers Norman
Hi Norman!
On Wed, Sep 23, 2015 at 5:08 PM, Norman Feske <norman.feske@...1...> wrote:
thanks for spreading the word! :-)
http://phoronix.com/scan.php?page=news_item&px=GPU-Microkernel-Support
The video of my talk just went up, so you can tell me if I said anything too silly. :-)
https://www.youtube.com/watch?v=FpbUMMguGEA
Judging by feedback I got after the talk, I at least succeeded in getting a few people to think about microkernels who hadn't before.
Jamey
I liked the "Right now we got crap everywhere. And that doesn't mean, that we shouldn't try to fix some of the crap." part.
Thanks for sharing this. Too bad the demo did not work, but we are at it right now.
Sebastian