PCI memory configuration

Christian Helmuth christian.helmuth at ...1...
Mon Aug 10 12:09:44 CEST 2009


Hi,

On Fri, Aug 07, 2009 at 01:50:31PM +0200, Frank Kaiser wrote:
> It looks as if we have a serious problem with the PCI device driver. The
> driver just takes over the PCI memory configuration from the BIOS, and
> it turns out that the current BIOS of the IVI platform assigns memory
> blocks with a minimum address increment of 1 KB.

You're right. Currently, the PCI driver doesn't touch the
preconfigured PCI resources.
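
For reference, a driver typically discovers the BIOS assignment like
this: read the BAR, probe its size by writing all-ones, and restore
the original value. The following C++ sketch is illustrative only;
pci_config_read/pci_config_write are hypothetical config-space
accessors (port 0xcf8/0xcfc or similar), not Genode API.

  #include <stdint.h>

  /* hypothetical config-space accessors, declared only */
  uint32_t pci_config_read (uint8_t bus, uint8_t dev, uint8_t fn,
                            uint8_t reg);
  void     pci_config_write(uint8_t bus, uint8_t dev, uint8_t fn,
                            uint8_t reg, uint32_t value);

  void probe_bar(uint8_t bus, uint8_t dev, uint8_t fn, unsigned bar,
                 uint32_t *base, uint32_t *size)
  {
      uint8_t reg = 0x10 + 4*bar;     /* BAR0 lives at offset 0x10 */

      uint32_t orig = pci_config_read(bus, dev, fn, reg);

      /* writing all-ones makes the device mask off the size bits */
      pci_config_write(bus, dev, fn, reg, ~0u);
      uint32_t mask = pci_config_read(bus, dev, fn, reg);

      /* restore the BIOS-assigned value - we only take it over */
      pci_config_write(bus, dev, fn, reg, orig);

      *base = orig & ~0xfu;          /* low 4 bits are flag bits */
      *size = ~(mask & ~0xfu) + 1;   /* 0xfffffc00 -> 0x400 (1 KB) */
  }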

> The outcome is that the register addresses of the 3 SDHCI
> controllers and the EHCI controller reside in one 4 KB page. Since
> Genode seems to always assign one page to each device, it can only
> handle the first controller found. Trying to allocate resources for
> the next controllers results in errors:
> 
> Genode::Io_mem_session_component::Dataspace_attr
> Genode::Io_mem_session_component::_prepare_io_mem(const char*,
> Genode::Range_allocator*): I/O memory [bfe57000,bfe58000) not available
> 
> Genode::Io_mem_session_component::Io_mem_session_component(Genode::Range
> _allocator*, Genode::Range_allocator*, Genode::Server_entrypoint*, const
> char*): Local MMIO mapping failed!
> 
> [init -> test-dde_os_linux26_mmc] request_mem_region() failed (start
> bfe57400, size 100)<4>PCI: Unable to reserve mem region #1:100 at ...26...
> for device 0000:00:0a.0
> 
> The PCI memory map is as follows:
[...]
> For the time being we require only one SDHCI controller, so we could
> omit the initialisation of the other two, but this still leaves the
> problem of the conflicting address assignment of the EHCI controller.
> I see only two solutions:
> 
> 1. Make Genode deal with PCI memory resources smaller than 4 KB.

I think we won't walk this path until Intel supports 1 KB pages, and
I hope you agree.
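
To see the clash in numbers: with 4 KB pages, everything that differs
only in address bits 0-11 lands in the same page. The values below are
modelled on your log (only bfe57000/bfe57400 actually appear there;
the other two are assumed siblings for illustration):

  #include <cstdio>

  int main()
  {
      /* three SDHCI BARs plus EHCI, 1 KB apart (illustrative) */
      unsigned long bars[] = { 0xbfe57000, 0xbfe57400,
                               0xbfe57800, 0xbfe57c00 };

      for (unsigned i = 0; i < sizeof(bars)/sizeof(bars[0]); i++)
          std::printf("BAR %#lx -> page %#lx\n",
                      bars[i], bars[i] & ~0xfffUL);

      /* prints page 0xbfe57000 four times: one page, four devices */
      return 0;
  }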

> 2. If #1 is not feasible, let the PCI driver reassign the memory
> resources with a minimum increment of 4 KB.

A clever resource-region assignment in the PCI driver sounds promising
for platforms like the IVI but, from my past experience, will not
always work as you expect. The last time I laid my hands on an Intel
board with this "shortcoming", the PCI regions still shared the same
physical page after reconfiguration of the resources. Please give it
a try and report your experiences.
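
If you do experiment with reassignment, the mechanics are roughly as
follows. This is a sketch, not tested code, and it reuses the same
hypothetical pci_config_read/pci_config_write helpers as above:

  #include <stdint.h>

  uint32_t pci_config_read (uint8_t bus, uint8_t dev, uint8_t fn,
                            uint8_t reg);
  void     pci_config_write(uint8_t bus, uint8_t dev, uint8_t fn,
                            uint8_t reg, uint32_t value);

  enum { CMD_REG = 0x04, CMD_MEM_ENABLE = 0x2, BAR0 = 0x10 };

  void reassign_bar(uint8_t bus, uint8_t dev, uint8_t fn,
                    unsigned bar, uint32_t new_base /* 4K-aligned */)
  {
      uint8_t reg = BAR0 + 4*bar;

      /* disable memory decoding while the BAR changes */
      uint32_t cmd = pci_config_read(bus, dev, fn, CMD_REG);
      pci_config_write(bus, dev, fn, CMD_REG, cmd & ~CMD_MEM_ENABLE);

      /* keep the low flag bits, replace only the base address */
      uint32_t old = pci_config_read(bus, dev, fn, reg);
      pci_config_write(bus, dev, fn, reg,
                       (old & 0xfu) | (new_base & ~0xfffu));

      /* restore the original command-register value */
      pci_config_write(bus, dev, fn, CMD_REG, cmd);
  }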

IMO you have two choices:

1) Put all drivers that use the same I/O memory page into one address
   space. There's no point in isolation at the process level if all
   isolated processes have full access to the hardware devices anyway.

2) Wrap the IO_MEM service in the parent process of the driver
   processes. If a driver creates an IO_MEM session, provide it with
   a capability to your local server and hand out the real I/O-memory
   dataspace capability (multiple times) on demand. For this solution
   you have to implement the Io_mem_session_server interface in your
   parent, as sketched below.
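
A rough sketch of what such a wrapper could look like. I'm
hand-waving over the exact server-side plumbing here; the names
(Io_mem_connection, Rpc_object, dataspace()) follow Genode
conventions but may not match the API of your tree exactly, so treat
this as an outline, not a drop-in implementation:

  #include <base/rpc_server.h>
  #include <io_mem_session/connection.h>

  /*
   * Hypothetical wrapper: one real IO_MEM session at core, opened
   * once for the shared page, handed out to any number of clients.
   */
  class Shared_io_mem : public Genode::Rpc_object<Genode::Io_mem_session>
  {
      private:

          Genode::Io_mem_connection _real; /* the one real session */

      public:

          /* 'base' and 'size' denote the shared physical page */
          Shared_io_mem(Genode::addr_t base, Genode::size_t size)
          : _real(base, size) { }

          /* every client receives the same dataspace capability */
          Genode::Io_mem_dataspace_capability dataspace() {
              return _real.dataspace(); }
  };

The parent's entrypoint would manage one such object per shared page
and hand the resulting session capability to each driver child, so
core only ever sees a single IO_MEM session for that page.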

Hope this helps
-- 
Christian Helmuth
Genode Labs

http://www.genode-labs.com/ · http://genode.org/
