Hi, Can I pass Native_capability types as [out] parameters using the "Native_capability *" type - with the cross-process mapping happening? Most of the examples only use the return value for [out] capabilities. I.e., GENODE_RPC(Rpc_foo, int, foo, Genode::Native_capability *) doesn't seem to work, but there might be something else afoot.
Daniel
Hi Daniel,
passing capabilities as out parameters is expected to work when passing the cap as a reference. Please find attached an example in the form of a patch, where I changed the signature of an RPC function to use an out parameter of a capability type.
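For illustration only, here is a minimal, self-contained sketch of the pattern Norman describes - a capability passed as a non-const reference, which the server-side implementation assigns into. The Cap type and the foo() function are stand-ins invented for this sketch, not the real Genode headers; in Genode, the RPC marshalling layer performs the copy-back to the caller.

```cpp
#include <cassert>

// Stand-in for Genode::Native_capability (hypothetical, for illustration):
// an opaque ID where 0 means "invalid".
struct Cap {
	long local_name = 0;
	bool valid() const { return local_name != 0; }
};

// Server-side implementation of an RPC function with a capability out
// parameter, analogous to a declaration like
//   GENODE_RPC(Rpc_foo, int, foo, Cap &);
// The RPC layer would transfer 'out' back to the calling client.
int foo(Cap &out)
{
	out = Cap{ 42 };   // hand out a (mock) valid capability
	return 0;          // ordinary return value stays available
}
```

The caller passes a default-constructed (invalid) capability and receives a valid one through the reference after the call returns.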
Best regards Norman
Hi Daniel,
On 02/14/2013 08:33 PM, Daniel Waddington wrote:
actually it should work the way you've described it. I've tested it just now by extending the hello example the same way, and the capability was successfully transferred to the calling client as an argument. By the way, why do you use Native_capability instead of Capability? Although both should work, I would use the generic Capability class, especially in an interface.
Best regards Stefan
Genode-main mailing list Genode-main@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/genode-main
Hi Stefan,
In my test server-side session function, if I use a core-created capability (via alloc_irq) it works. If I try to create a similar capability locally, the capability appears to be valid (i.e., it exists in the jdb object table) but will not marshal correctly - .valid() fails at the client side (note my server process has L4_BASE_FACTORY_CAP).
Can you enlighten me? I am clearly doing something wrong.
status_t Foo::Session_component::create(Genode::Native_capability &result_cap)
{
#if WORKS
    Genode::Foc_cpu_session_client cpu(Genode::env()->cpu_session_cap());
    result_cap = cpu.alloc_irq();
#endif

#if DOES_NOT_WORK
    Cap_index *i = Genode::cap_idx_alloc()->alloc_range(1);

    l4_msgtag_t res = l4_factory_create_irq(L4_BASE_FACTORY_CAP, i->kcap());
    assert(!l4_error(res));

    Genode::Native_capability ncap(i);
    result_cap = ncap;
#endif
}
BTW, I'm currently using Native_capability to test. But I also do not know how to convert a Native_capability to a typed capability. ;-) Can you show me?
Daniel
Hi Daniel,
On 02/19/2013 01:38 AM, Daniel Waddington wrote:
I see. Well, that's the reason I recommended putting such a service into core ;-). The point is: although your self-constructed capabilities have valid indices in the capability name space controlled by the kernel, they have invalid IDs (alias "local_name"s). These IDs are Genode-specific and have nothing to do with the kernel API. They are used to find capabilities a task already owns. For that purpose, all capabilities are stored in a task-local AVL tree, with the IDs serving as keys. A capability without a proper ID (ID == 0) is treated as an invalid capability. When you try to marshal a capability into the message buffer, a check is made whether you are trying to transfer an invalid capability. Without that check, the kernel would pollute the debug output with warnings about failed capability transfers. If the capability is invalid, no mapping gets established. That's why no capability is transferred in your case.
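The invariant described above can be sketched in a few lines. This is a toy model, not Genode source: a std::map stands in for the task-local AVL tree, the field names are hypothetical, and the marshal() check merely mirrors the described behavior (refuse capabilities whose Genode ID is 0 or unregistered).

```cpp
#include <cassert>
#include <map>

// Sketch: a capability carries a kernel index (kcap) and a Genode-assigned
// ID (local_name). An ID of 0 marks the capability as invalid.
struct Cap {
	long local_name = 0;   // Genode-specific ID, 0 == invalid
	long kcap       = 0;   // kernel capability index (may be set anyway)
	bool valid() const { return local_name != 0; }
};

// Task-local registry; std::map stands in for Genode's AVL tree,
// keyed by the capability ID.
static std::map<long, Cap> cap_registry;

// Marshalling check: only valid, registered capabilities are transferred.
bool marshal(Cap const &cap)
{
	if (!cap.valid())
		return false;                          // no mapping established
	return cap_registry.count(cap.local_name) == 1;
}
```

A hand-made capability with a perfectly valid kernel index but no Genode ID fails this check, which matches the .valid() failure Daniel observed at the client side.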
I see two ways to solve the problem: First, implement a proper service in core, or use an existing one (e.g., Cpu_session::alloc_irq). If you have to implement your own service in core, or extend an existing one, you can use core's allocator for capability IDs: "Platform::cap_id_alloc()".
If for some reason it is impossible for you to do this in core, you might allocate a capability via core's cap_session service for every capability you want to construct by hand. Thereby, you obtain an ID that is not used otherwise. But be careful, this is the path of pain: you have to remove the capability allocated via core from your task-local AVL tree before inserting your own capability into it. This should be done via the smart-pointer magic only. Don't remove a capability from the tree by hand while you still hold references to it! In other words, you have to drop all references to the capability allocated via core, so that its destructor performs the database removal for you. Later, when you want to free your capability again, you will have to re-construct the capability allocated via core, so that you can go to core's cap_session service and free it. Otherwise, you'll have a capability leak in core.
Summing up, I hope I could convince you to implement variant number one ;-).
status_t Foo::Session_component::create(Genode::Native_capability& result_cap) { #if WORKS Genode::Foc_cpu_session_client cpu(Genode::env()->cpu_session_cap()); result_cap = cpu.alloc_irq(); #endif
#if DOES_NOT_WORK Cap_index * i = Genode::cap_idx_alloc()->alloc_range(1);
l4_msgtag_t res = l4_factory_create_irq(L4_BASE_FACTORY_CAP, i->kcap()); assert(!l4_error(res));
Genode::Native_capability ncap(i); result_cap = ncap; #endif }
BTW, I'm currently using Native_capability to test. But I also do not know how to convert a Native_capability to a typed capability. ;-) Can you show me?
You can use the following method defined in "base/include/base/capability.h" for it:
template <typename RPC_INTERFACE>
Capability<RPC_INTERFACE> reinterpret_cap_cast(Untyped_capability const &untyped_cap);
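To make the call pattern concrete, here is a self-contained sketch. The real declaration lives in base/include/base/capability.h; the stub types below (Untyped_capability, Capability, Session) are stand-ins invented for this example, and the stub body only mimics the idea of the cast - reinterpreting an untyped capability as a typed one without any runtime check.

```cpp
#include <cassert>

// Stand-ins for the Genode types, just to show the call pattern:
struct Untyped_capability { long local_name = 0; };

template <typename RPC_INTERFACE>
struct Capability : Untyped_capability { };

// Mimics the quoted signature: turn an untyped capability into a typed
// one. No check is performed, hence "reinterpret".
template <typename RPC_INTERFACE>
Capability<RPC_INTERFACE> reinterpret_cap_cast(Untyped_capability const &untyped_cap)
{
	Capability<RPC_INTERFACE> typed;
	typed.local_name = untyped_cap.local_name;
	return typed;
}

struct Session { };   // hypothetical RPC interface
```

Usage then looks like: Capability&lt;Session&gt; typed = reinterpret_cap_cast&lt;Session&gt;(ncap);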
Best regards Stefan
Hi Stefan,

OK, I think this clears things up for me - the hazards of Genode hacking! ;)
Thanks for your help as always, Daniel
Hi Daniel,
OK, I think this clears things up for me - the hazards of Genode hacking! ;)
I am not quite sure what you mean by "hazard". The mechanism Stefan described is actually a safety net that relieves users of the framework of the burden of managing the lifetime of capabilities manually. I'd dare say that doing the lifetime management of capabilities manually would be hazardous. In contrast, the Genode API provides a coherent and safe way that avoids leaking capabilities (and the associated kernel resources).
The problem you are facing right now is that you are deliberately breaking through the abstraction of the API and thereby (unknowingly) violate an invariant that is normally guaranteed by the Genode API implementation. In particular, you create capabilities out of thin air, which is not possible via the legitimate use of the API. Because this invariant is not satisfied anymore, another part of the API (RPC marshalling of capabilities) that relies on it does not work as expected.
So I support Stefan in his suggestion of the first solution (letting core create capabilities and export them via a core service), as this solution does not work against the design of Genode.
That said, there might be a third solution, which is creating a valid ID manually without involving core's CAP service. This is done when constructing the parent capability at process startup:
https://github.com/genodelabs/genode/blob/master/base-foc/src/platform/_main...
Following this procedure, a valid Genode capability gets created, which can then in principle be delegated via RPC. By using 'cap_map()->insert()', the code satisfies the invariant needed by the RPC mechanism to marshal the capability.
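A toy model of that invariant, with stand-in names throughout: a std::map plays the role of the capability map, and next_id is a hypothetical ID allocator (in reality the ID must come from a legitimate source, which is exactly the point of the discussion). The sketch only shows the shape of the procedure - assign a non-zero ID to a raw kernel selector and register it, so the RPC layer would accept the capability.

```cpp
#include <cassert>
#include <map>

// Sketch: wrapping a raw kernel capability selector into a capability
// that the RPC layer would consider marshallable.
struct Cap {
	long local_name = 0;   // must be non-zero and registered
	long kcap       = 0;   // kernel selector being wrapped
};

static std::map<long, Cap> cap_map;   // stand-in for cap_map()
static long next_id = 1;              // hypothetical ID allocator

Cap wrap_selector(long kernel_selector)
{
	Cap cap{ next_id++, kernel_selector };   // assign a fresh valid ID
	cap_map[cap.local_name] = cap;           // satisfy the RPC invariant
	return cap;
}
```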
This way, you could wrap a Fiasco.OC capability selector (e.g., a scheduler cap selector) into a Genode capability in order to delegate it to another process. I guess, this is what you'd like to do?
@Stefan: Would that be a feasible approach?
Cheers Norman
On 02/20/2013 11:53 AM, Norman Feske wrote:
This way, you could wrap a Fiasco.OC capability selector (e.g., a scheduler cap selector) into a Genode capability in order to delegate it to another process. I guess, this is what you'd like to do?
@Stefan: Would that be a feasible approach?
Well, not really. The parent capability is a corner case. It is the only capability that is inserted manually without using the IPC framework, because we need it to do the first IPC at all. To enable the use of the parent capability when starting a new child, the parent stores the capability ID at a specific place (&_parent_cap) when setting up the child's address space. For all capabilities "created out of thin air", the problem remains how to get a valid capability ID.
A viable third way, without using core's CAP service, would be to shrink the ID range used by core and use the IDs that become free. Of course, the problem remains how to divide up the IDs between potentially different tasks.
@Daniel: The burden of having global capability IDs, a capability registry, retrieval, etc. would not exist if the kernel API allowed identifying capability duplicates when receiving one. Currently, the only way to determine whether a received capability already exists in the protection domain is either to compare it against all capabilities one possesses, or to use an additional identifier. The first solution is obviously not feasible, because every comparison between two capabilities costs one kernel syscall. That means, if you own 100 capabilities, you have to do 100 syscalls when receiving a new capability. Therefore, we chose the second approach of sending a globally unique ID along with the capability. NOVA is an example of a capability-based kernel where this additional ID is not needed.
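The cost argument can be made concrete with a toy model contrasting the two duplicate-detection strategies. Both functions are invented for this sketch; on Fiasco.OC, each comparison in the first strategy would be one kernel syscall, whereas the second strategy only needs a task-local lookup on the ID shipped with the capability.

```cpp
#include <cassert>
#include <cstddef>
#include <set>

// Strategy 1: compare the incoming capability against every owned one.
// Each comparison would be a kernel syscall, so the cost grows linearly
// with the number of owned capabilities.
std::size_t syscalls_for_compare_all(std::size_t owned_caps)
{
	return owned_caps;   // one syscall per owned capability
}

// Strategy 2: ship a globally unique ID with the capability and look it
// up in a task-local set - no per-capability syscalls needed.
bool is_duplicate(std::set<long> const &owned_ids, long incoming_id)
{
	return owned_ids.count(incoming_id) == 1;
}
```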
Best regards Stefan
Thanks Norman and Stefan for your help. For the immediate need I will use alloc_irq together with the ICU cap limited to my special core process. If I need to get raw caps out again, I think I will look into using core's cap session or partitioning out the cap id space.
Daniel