Dear Genode developers,
Following Martijn's mail at the end of last year, which stated:
As a way to support detachment / reattachment of USB storage I’m thinking about placing the rump_fs and part_blk components in a child subtree of the CLI component that is spawned on demand and cleaned-up after use. But this seems a bit like overkill.
That's exactly the right solution. I don't think it's overkill either. Spawning rump_fs and part_blk dynamically is certainly quick enough. Memory-wise, it does not take more resources than a static scenario either. By letting your CLI component implement the protocol outlined above, you have full control over the chain of events. Also, an aborting rump_fs is no longer fatal but can be handled gracefully by the CLI component. As another benefit, the solution does not require us to add the notion of hot-plugging to the file-system and block session interfaces, which would otherwise inflate the complexity of these interfaces (and thereby of all the clients that rely on them).
Martijn and I have been thinking of a way to implement this, and came to the conclusion that instead of spawning the stack as children of the CLI component, it might be better to use a new management component in-between. As Josef said earlier:
A custom runtime/management component could monitor the usb_drv device report and spawn the whole stack if it detects a USB storage device. The usb_drv's device report does not contain the device class so far though but adding that to the report is easy.
This is exactly what we're trying to do now. We want to create a custom component called "media" that monitors USB devices by reading the report. It provides a service to other components through which they can request a file-system session in order to read from and write to the USB stick. For this, it spawns the part_blk and rump_fs components as children when a USB stick is plugged in, and kills them once it is unplugged. It roughly looks like this:
     rump_fs  part_blk
         \      /
  CLI     media     USB_drv
     \      |      /
          init
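To sketch the monitoring part of media (a rough, untested draft on our side; we assume the usb_drv device report reaches media as a ROM module named "devices", e.g. via report_rom, and the 'Media' type is just a made-up name):

  #include <base/component.h>
  #include <os/attached_rom_dataspace.h>
  #include <util/xml_node.h>

  struct Media
  {
    Genode::Env &_env;

    /* ROM module with the usb_drv device report */
    Genode::Attached_rom_dataspace _devices { _env, "devices" };

    Genode::Signal_handler<Media> _devices_handler {
      _env.ep(), *this, &Media::_handle_devices };

    void _handle_devices()
    {
      _devices.update();
      Genode::Xml_node devices(_devices.local_addr<char>(), _devices.size());

      /* if a storage device appeared, spawn part_blk and rump_fs,
         if it disappeared, destroy those children */
    }

    Media(Genode::Env &env) : _env(env)
    {
      _devices.sigh(_devices_handler);
      _handle_devices();
    }
  };

  void Component::construct(Genode::Env &env) { static Media media(env); }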
But this raises a few questions. First, the file-system interface needs to be presented to the client somehow. To avoid adding another layer of indirection in media, essentially duplicating rump_fs's entire API, we would like the client (in this case CLI) to be connected directly to rump_fs. The client can then ask media whether the USB stick is connected before calling a function of rump_fs.
However, this means that rump_fs provides a service, announces it to its parent (media), and media has to decide what to do with that announcement. It can run rump_fs as a slave, but that way the entire API needs to be copied into media so that media can present it as its own service to the client.
So we would like media to announce the file-system service to its parent (in this case init), so that any client can use this service. In the same way, any session request would be passed from CLI to its parent (init), init would pass it to media, and media would pass it to rump_fs. However, the current implementation and specification of Genode does not make a server's services visible at any level above the server's parent. Services can only be provided to direct parents, and to other components in the parent's subtree. Therefore, copying the API from the child to the parent seems unavoidable.
Another problem that pops up is that media has to spawn all these subcomponents as children. In order to route block-session requests from rump_fs to part_blk, media needs to implement some routing policy and effectively serves the same role for these two components as init serves for the system. So we could:
1. Copy all necessary code for routing from init to media (which is almost all of init's code if we want to be generic).
2. Let media spawn another init child component (let's call it sub-init for now), which in turn spawns rump_fs and part_blk and does the routing (sketched below).
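For illustration, we imagine sub-init's config roughly like this (RAM quotas and policy details are guesses on our part, and the parent-provides list is abbreviated):

  <config>
    <parent-provides>
      <service name="ROM"/> <service name="RAM"/> <service name="CPU"/>
      <service name="PD"/> <service name="LOG"/>
      <!-- Block session of the USB storage driver -->
      <service name="Block"/>
    </parent-provides>
    <start name="part_blk">
      <resource name="RAM" quantum="2M"/>
      <provides> <service name="Block"/> </provides>
      <config> <policy label="rump_fs" partition="1"/> </config>
      <route> <any-service> <parent/> </any-service> </route>
    </start>
    <start name="rump_fs">
      <resource name="RAM" quantum="16M"/>
      <provides> <service name="File_system"/> </provides>
      <!-- the policy label must match the File_system client -->
      <config fs="ext2fs"> <policy label="cli" root="/" writeable="yes"/> </config>
      <route>
        <service name="Block"> <child name="part_blk"/> </service>
        <any-service> <parent/> </any-service>
      </route>
    </start>
  </config>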
To us, the second option seems much cleaner as it involves no code copying. However, services announced by rump_fs cannot be used by components that are not children of the new init, which makes them rather useless. Their announcements cannot be passed on to the parents, leaving us with the same problem as we had with rump_fs, plus the additional problem that even if there were a custom way to forward service announcements and session requests to the parent and child respectively, sub-init has no such policy, and this functionality would have to be added to sub-init's code as well, adding a lot of complexity.
Eventually, both cases boil down to the same problem: service announcements to a parent cannot automatically be forwarded to that parent's parent, and likewise, session requests cannot easily be delegated to children of children. The only option, if I'm correct, is to implement this functionality manually, but that does not work if the parent is an existing component that does not support it.
Is there a reason this is never done? For init, it is clear that it would never pass an announcement to its parent (usually core) or receive session requests from it. But how about the general case?
And how should we solve cases such as the above scenario?
kind regards,
Boris Mulder
Cyber Security Labs B.V. | Gooimeer 6-31 | 1411 DD Naarden | The Netherlands +31 35 631 3253 (office)
Hello Boris,
welcome to the mailing list and thank you for the elaborate description of your scenario and approach.
As a side note, the discussion reminds me of a very similar problem we addressed some years ago:
http://genode.org/documentation/release-notes/12.02#Device_drivers
Unfortunately, we removed the described d3m component later on because it turned out to be not as flexible as we had hoped. However, on the positive side, scenarios like yours are not completely alien to Genode. ;-)
This is exactly what we're trying to do now. We want to create a custom component called "media" that monitors USB devices by reading the report. It provides a service to other components through which they can request a file-system session in order to read from and write to the USB stick. For this, it spawns the part_blk and rump_fs components as children when a USB stick is plugged in, and kills them once it is unplugged. It roughly looks like this:
     rump_fs  part_blk
         \      /
  CLI     media     USB_drv
     \      |      /
          init
This looks very good to me.
But this raises a few questions. First, the file-system interface needs to be presented to the client somehow. To avoid adding another layer of indirection in media, essentially duplicating rump_fs's entire API, we would like the client (in this case CLI) to be connected directly to rump_fs. The client can then ask media whether the USB stick is connected before calling a function of rump_fs.
You are right that wrapping the 'File_system' interface would be cumbersome. In your case, it is better to let CLI use the rump_fs-provided session directly. This can be achieved by letting the media component pass the session capability as obtained from rump_fs to its parent (init). So CLI would use the rump_fs session directly.
However, this means that rump_fs provides a service, announces it to its parent (media), and media has to decide what to do with that announcement. It can run rump_fs as a slave, but that way the entire API needs to be copied into media so that media can present it as its own service to the client.
You are already on the right track. Running rump_fs as a slave is good. You just missed a tiny piece of the puzzle: the 'Slave::Connection' not only provides the session interface of the slave's service but also the corresponding 'Session_capability' (it inherits 'CONNECTION::Client', so the 'Slave::Connection' _is_ a session capability). Instead of calling the 'File_system' methods, the media component would pass this 'Session_capability' to init in response to the 'File_system' session request that originated from init.
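To give you a rough idea, media's root implementation could look like the following sketch (just typed down, not compiled; 'Media_root' and '_fs_slave' are placeholder names, and the exact header locations may differ):

  #include <base/rpc_server.h>
  #include <root/root.h>
  #include <os/slave.h>
  #include <file_system_session/connection.h>

  class Media_root : public Genode::Rpc_object<Genode::Typed_root<File_system::Session> >
  {
    private:

      /* slave connection to rump_fs, valid while the USB stick is present */
      Genode::Slave::Connection<File_system::Connection> &_fs_slave;

    public:

      Media_root(Genode::Slave::Connection<File_system::Connection> &fs_slave)
      : _fs_slave(fs_slave) { }

      Genode::Session_capability session(Session_args const &,
                                         Genode::Affinity const &) override
      {
        /* hand out the slave's session capability instead of
           re-implementing the 'File_system' RPC functions */
        return _fs_slave;
      }

      void upgrade(Genode::Session_capability, Upgrade_args const &) override { }
      void close(Genode::Session_capability) override { }
  };

Media would then announce this root to its parent (init) the usual way.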
Services can only be provided to direct parents, and to other components in the parent's subtree. Therefore, copying the API from the child to the parent seems unavoidable.
There is no such limitation. But you are right that the use case has been so rare that it is nearly impossible to find examples in Genode's source tree. The above-mentioned d3m was such an example. Another example is the GDB monitor (however, there we temporarily removed the feature to run Genode services within GDB monitor).
Another problem that pops up is that media has to spawn all these subcomponents as children. In order to route block-session requests from rump_fs to part_blk, media needs to implement some routing policy and effectively serves the same role for these two components as init serves for the system. So we could:
- Copy all necessary code for routing from init to media (which is
  almost all of init's code if we want to be generic).
- Let media spawn another init child component (let's call it sub-init
  for now), which in turn spawns rump_fs and part_blk and does the routing.
To us, the second option seems much cleaner as it involves no code copying. However, services announced by rump_fs cannot be used by components that are not children of the new init, which makes them rather useless. Their announcements cannot be passed on to the parents, leaving us with the same problem as we had with rump_fs, plus the additional problem that even if there were a custom way to forward service announcements and session requests to the parent and child respectively, sub-init has no such policy, and this functionality would have to be added to sub-init's code as well, adding a lot of complexity.
I agree with everything you said. Until Genode 16.11, it was not reasonable for init to forward session requests to its children because of the synchronous nature of the parent interface. Now that we have revised this interface to work asynchronously [1], we can move forward and add this feature to init. Indeed, I plan to add it along with the dynamic reconfigurability of init in the near future (as outlined in my original road-map posting [2]). With the new version of init, scenarios like yours will become pretty straightforward to realize.
[1] http://genode.org/documentation/release-notes/16.11#Asynchronous_parent-chil...
[2] https://sourceforge.net/p/genode/mailman/genode-main/thread/585A6FE2.1060800...
And how should we solve cases such as the above scenario?
In the not-too-distant future, your case should be well covered by init, alleviating the need to implement a custom runtime component. In the meantime, I recommend you to follow the slave approach described above (forwarding the session capability of the 'Slave::Connection' to init).
I would be very interested to hear how this turns out. Should my above description remain too vague or leave your questions unanswered, please don't hesitate to get back to me.
Cheers Norman
I have been looking into your suggestions and I have some questions about them.
On 13-01-17 11:32, Norman Feske wrote:
You are already on the right track. Running rump_fs as a slave is good. You just missed a tiny piece of the puzzle: the 'Slave::Connection' not only provides the session interface of the slave's service but also the corresponding 'Session_capability' (it inherits 'CONNECTION::Client', so the 'Slave::Connection' _is_ a session capability). Instead of calling the 'File_system' methods, the media component would pass this 'Session_capability' to init in response to the 'File_system' session request that originated from init.
I assume that here the session() method inherited from Genode::Root has to be implemented such that it returns the Slave::Connection as a capability, once that connection has been initiated?
On 14-01-17 20:58, Nobody III wrote:
I have personally done some work related to this issue. First off, I would suggest adding code to allow init to share child services with its parent. I also have a service_router component that I wrote. You may not be able to use it directly, but feel free to take some of the code to use in your media component. Here's a link to the code: https://github.com/NobodyIII/genode/tree/master/repos/os/src/server/service_...
The code is a bit messy, so any help on making it ready to merge into the official Genode repo would be very welcome.
Here, you create a Forwarded_capability struct, which wraps a session capability. It inherits from Id_space<Parent::Client>::Element. Why, if I may ask? Do I need to do that too?
It eventually invokes env.session() to create a new capability for the forwarded service. Why does it not get its capability from the server but instead seems to create a new session for a certain service? It seems to me that the service router does not forward capabilities from children, or am I wrong? Does the cap live somewhere else?
I'm missing the picture a bit here. Can you explain how it works with those capabilities?
Hi Boris,
You just missed a tiny piece of the puzzle: the 'Slave::Connection' not only provides the session interface of the slave's service but also the corresponding 'Session_capability' (it inherits 'CONNECTION::Client', so the 'Slave::Connection' _is_ a session capability). Instead of calling the 'File_system' methods, the media component would pass this 'Session_capability' to init in response to the 'File_system' session request that originated from init.
I assume that here the session() method inherited from Genode::Root has to be implemented such that it returns the Slave::Connection as a capability, once that connection has been initiated?
yes.
Cheers Norman
All right, so far, the forwarding of sessions works. However, when closing a session, there is an issue.
Whenever a client connection is closed, the client calls close() with a session cap on the root. The root then has to look through its open sessions, compare the session cap of each of them with the provided cap, and then clean up all data related to that session.
For the service router example, it does the following on line 52 (service_router/main.cc):
for (Forwarded_capability *cap = _caps.first(); cap; cap = cap->next()) {
    if (*cap == session)
        return cap;
}
It checks whether these capabilities are equal using the '==' operator. In Capability, this operator compares the internal pointers (Native_capability::Data *_data) of the two Capability objects, each of which points to an object containing metadata such as an RPC destination and a key.
However, when this session capability is passed as an argument to the close() or upgrade() method of the root RPC interface, the unmarshaller at the server side will always create a new Capability object with new data using the Capability_space_tpl::import method (if I am not mistaken), instead of using lookup(). This is done, for instance, on Linux and on NOVA in ipc.cc. Therefore the cap pointers will never be equal, because they point to distinct, duplicate cap-data objects with the same content. Is this the correct behaviour?
When testing it with print() by inserting the following line
log("testing... session = ", session, " cap = ", cap, " equal = ", session == cap);
it outputs the following:
session = cap<socket=27,key=474> cap = cap<socket=27,key=474> equal = 0
So the comparison will always fail, and the overloaded close() and upgrade() methods of Root cannot close/upgrade the correct session.
Am I missing something here or is it not possible right now to locally keep track of multiple forwarded session capabilities in this way?
Or is there a workaround?
Regards,
Boris
Hello Boris,
When testing it with print() by inserting the following line
log("testing... session = ", session, " cap = ", cap, " equal = ", session == cap);
it outputs the following:
session = cap<socket=27,key=474> cap = cap<socket=27,key=474> equal = 0
So the comparison will always fail, and the overloaded close() and upgrade() methods of Root cannot close/upgrade the correct session.
Am I missing something here or is it not possible right now to locally keep track of multiple forwarded session capabilities in this way?
the kernel mechanisms for re-identifying capabilities vary a lot between the various kernels. For example, for seL4 I brought up this problem long ago [1] but there is still no good solution. On NOVA, the situation looks a bit brighter since we extended the kernel in this respect. In base-hw, it works.
[1] http://sel4.systems/pipermail/devel/2014-November/000114.html
For your current scenario, I recommend you to change the comparison to
session.local_name() == cap.local_name()
The 'local_name' corresponds to the 'key' you observe in the output of the capability. It is expected to be unique for the corresponding RPC object.
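Applied to the lookup loop you quoted, this would read:

  for (Forwarded_capability *cap = _caps.first(); cap; cap = cap->next()) {
      if (cap->local_name() == session.local_name())
          return cap;
  }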
In the longer term, we try to largely eliminate the need to re-identify capabilities. In particular since Genode 16.11 [2], the interplay between parent and child components no longer relies on the re-identification of capabilities. It employs IDs instead. In fact, under the hood, there are no 'Root' RPC calls between components any more. But at the API level, we have not made the new facilities available yet. For now, I recommend you to use the 'local_name', or the 'Object_pool', which is a data structure that associates capabilities with a component-local object.
[2] http://genode.org/documentation/release-notes/16.11#Asynchronous_parent-chil...
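For completeness, a minimal sketch of the 'Object_pool' approach (the type and member names are made up; note that the matching object must be destructed outside of 'apply'):

  #include <base/object_pool.h>

  /* component-local representation of one forwarded session */
  struct Fwd_session : Genode::Object_pool<Fwd_session>::Entry
  {
    Fwd_session(Genode::Session_capability cap)
    : Genode::Object_pool<Fwd_session>::Entry(cap) { }
  };

  Genode::Object_pool<Fwd_session> _pool;  /* populated via '_pool.insert()' */

  void close(Genode::Session_capability session) override
  {
    Fwd_session *to_destroy = nullptr;

    /* resolve the capability to the matching local object */
    _pool.apply(session, [&] (Fwd_session *s) {
      if (s) {
        _pool.remove(s);
        to_destroy = s;
      }
    });

    /* ... destruct 'to_destroy' and close the session at the slave ... */
  }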
Cheers Norman
Thanks, that solves it for now.
Boris
Hi, I've stumbled upon a bit of a problem when using the USB driver coupled with a usb_block driver component. As Josef said earlier:
FWIW, there is an USB block storage driver [1] that uses the Usb raw session and can be used instead of the in-built storage driver of the usb_drv. A custom runtime/management component could monitor the usb_drv device report and spawn the whole stack if it detects a USB storage device. The usb_drv's device report does not contain the device class so far though but adding that to the report is easy.
Now I'm spawning this USB block driver dynamically, which then tries to connect to the USB driver. In my scenario, the USB driver is found, but at some point usb_block just hangs the first time it reaches the line:
iface.bulk_transfer(p, ep, block, &c);
(usb_block/main.cc line 308, called from line 432 as I verified with print statements in the code)
The bulk_transfer method (with block=true) blocks indefinitely.
I do not know what causes this. I think it might be the USB interface specified in the config of usb_block. The config passed to usb_block looks like this:
<config label="usb-3-1" report="yes" writeable="yes" interface="0" lun="0" />
where usb-3-1 is the correct device label. Omitting the interface and lun attributes from the config gives the same error. The USB driver config (which happens to be generated by a usb_report_filter) looks like this:
<config uhci="yes" ehci="yes" xhci="yes">
    <hid/>
    <raw>
        <report devices="yes"/>
        <policy label="media -> usb_blk -> usb-1-3" vendor_id="0x058f"
                product_id="0x6387" bus="0x0001" dev="0x0003"/>
    </raw>
</config>
Since the USB block driver gives the "Device plugged" message, I'd say the problem is not that it cannot find the right device or driver. The config of the USB driver also allows usb_block to open a Usb session (otherwise the policy parser would have thrown an error).
Besides this, the USB driver gives "Could not read string descriptor index: 0" warnings somewhere inside initialize() at line 365 of usb_block. Otherwise, nothing is to be seen. This makes it look like usb_block is connected to the USB driver. Besides, no other components provide the Usb service in my scenario. I am sure the USB stick itself is formatted properly (it has worked in another scenario and in Linux as well).
Can anybody help me with this?
Hello Boris,
* Boris Mulder <boris.mulder@...434...> [2017-02-10 14:12:18 +0100]:
Now I'm spawning this USB block driver dynamically, which then tries to connect to the USB driver. In my scenario, the USB driver is found, but at some point usb_block just hangs the first time it reaches the line:
iface.bulk_transfer(p, ep, block, &c);
(usb_block/main.cc line 308, called from line 432 as I verified with print statements in the code)
The bulk_transfer method (with block=true) blocks indefinitely.
It looks like the INQUIRY command does not complete; I already observed this behaviour with a Delock USB SATA adapter. When using an HDD, we might need to issue a START STOP UNIT command to get the device into a working state before executing any other command, but I have not looked into that so far.
<config uhci="yes" ehci="yes" xhci="yes">
    <hid/>
    <raw>
        <report devices="yes"/>
        <policy label="media -> usb_blk -> usb-1-3" vendor_id="0x058f"
                product_id="0x6387" bus="0x0001" dev="0x0003"/>
    </raw>
</config>
That being said, judging by the vendor and product ID, you are using a Transcend USB stick. We had problems with such sticks in the past, even when using the usb_drv's built-in storage driver. So could you please try a stick from another vendor, just to make sure that it is indeed the combination of stick and driver that does not work.
It is most likely that we do not wait long enough in the usb_block driver for the device to get itself into a working state, or that we do not do all of the necessary configuration, i.e., applying quirks and the like, to get it there.
Regards Josef
All right, now I'm using a SanDisk USB stick (<device vendor_id="0x0781" product_id="0x5591"/>), and it does not give this error. However, when I try to list the files and their contents in the root directory using File_system::read, I get a bunch of other errors:
[init -> media] child "rump_fs1" announces service "File_system"
(here it calls dir())
[init -> media -> usb_blk] Error: complete error: packet not succeded
[init -> media -> usb_blk] Error: request pending: tag: 5 read: 0 buffer: 0x406800 lba: 7423 size: 4096
(here it calls File_system::read())
[init -> media -> usb_blk] Error: complete error: packet not succeded
[init -> media -> usb_blk] Error: request pending: tag: 6 read: 1 buffer: 0x406800 lba: 11511 size: 4096
[init -> media -> usb_blk] Error: complete error: packet not succeded
[init -> media -> usb_blk] Error: request pending: tag: 7 read: 1 buffer: 0x406800 lba: 11511 size: 4096
From here on, these errors keep coming indefinitely (with the same error code, except for the tag, which keeps advancing). It looks like it keeps retrying the same packet without stopping.
Apparently, it does see the dir_handle returned by dir("/") as valid (the valid() check succeeds).
Any ideas as to what causes this?