Dear Genode community,
thanks to your help [1], I was able to implement my Checkpoint/Restore mechanism on Genode/Fiasco.OC. I also added the incremental checkpoint optimization to store only the memory regions that changed since the last checkpoint (although this is not working reliably due to a Fiasco.OC bug, which Stefan Kalkowski found for me [2]). I also managed to checkpoint the capability map, restore it with new badges, and insert the missing capabilities into the capability space of Fiasco.OC.
[1] https://sourceforge.net/p/genode/mailman/message/35322604/
[2] https://sourceforge.net/p/genode/mailman/message/35377269/
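To illustrate the idea behind the incremental optimization, here is a minimal sketch (not my actual implementation; all names are made up): a region's content is written to the checkpoint only if it differs from the previous checkpoint, detected here by a simple checksum.

    // Minimal sketch of the incremental-checkpoint idea (all names are
    // made up, this is not the actual implementation)
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Region
    {
        std::uint8_t const *addr;     // locally attached dataspace content
        std::size_t         size;
        std::uint64_t       last_sum; // checksum at the last checkpoint
    };

    static std::uint64_t checksum(std::uint8_t const *p, std::size_t n)
    {
        std::uint64_t s = 0;
        for (std::size_t i = 0; i < n; i++) s = s*131 + p[i];
        return s;
    }

    void checkpoint_incremental(std::vector<Region> &regions)
    {
        for (Region &r : regions) {
            std::uint64_t const sum = checksum(r.addr, r.size);
            if (sum == r.last_sum) continue;  // unchanged, skip region
            /* ...copy r.addr[0..r.size) to the checkpoint storage... */
            r.last_sum = sum;
        }
    }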
My problem is that although I restore all RPC objects, in particular the instruction and stack pointer of the main thread, as well as the capability map and space, the target component simply starts its execution from the beginning of its Component::construct function.
My approach: For the restore phase, I use Genode's native bootstrap mechanism (i.e., I create a Genode::Child object) until the child requests a LOG session from my Checkpoint/Restore component. I force a LOG session request in ::Constructor_component::construct() just before "Genode::call_component_construct(env);" in
https://github.com/genodelabs/genode/blob/16.08/repos/base/src/lib/base/entr...
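In essence, the inserted quirk looks like this (sketch only; the session label is made up, and the parent - my Checkpoint/Restore component - answers this request only after the restoration is done):

    // Sketch of the forced LOG-session request inside
    // ::Constructor_component::construct() (the label is made up)
    Genode::Log_connection log(env, "restore-hook"); /* intercepted by the  */
                                                     /* parent, which       */
                                                     /* restores the state  */
    Genode::call_component_construct(env);           /* original code       */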
Up to the session request, several RAM dataspaces are created, among other RPC objects, and attached to the address space. In my restore mechanism, I identify the RPC objects that were created by the bootstrap/startup mechanism and restore only their state. After that point, I recreate all other RPC objects known to the child component and restore their state. Finally, I restore the capability map and space.
During that process, the mandatory CPU threads are identified (three of them: "ep", "signal_handler", and "childs_rom_name") and restored to their checkpointed state, in particular the ip and sp registers. I did that via Cpu_thread::state(Thread_state), but without luck. Although I know that the CPU threads were already started, I also tried calling Cpu_thread::start(ip, sp), but without success.
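For reference, the register restoration boils down to the following (simplified; 'thread_cap', 'stored_ip', and 'stored_sp' stand for the monitored thread capability and the checkpointed values):

    // Simplified restoration of a thread's registers via the Cpu_thread
    // RPC interface ('thread_cap', 'stored_ip', 'stored_sp' come from my
    // monitoring data)
    Genode::Cpu_thread_client thread(thread_cap);

    Genode::Thread_state state = thread.state(); /* read current state   */
    state.ip = stored_ip;                        /* instruction pointer  */
    state.sp = stored_sp;                        /* stack pointer        */
    thread.state(state);                         /* write the state back */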
After the restoration, which happens entirely during the child's LOG session request, my component returns a valid session object to the child. Now the child should continue its work from the point where it was checkpointed, but it continues execution right after the LOG session request, ignoring the restored instruction pointer.
The source code for restoring the CPU-thread state can be found in [3]. I used the run script [4] for the tests.
[3] https://github.com/702nADOS/genode-CheckpointRestore-SharedMemory/blob/660a8...
[4] https://github.com/702nADOS/genode-CheckpointRestore-SharedMemory/blob/660a8...
Curiously, the child runs as if nothing had happened, although its stack area was also manipulated.
Perhaps my approach of reusing the bootstrap/startup mechanism is not destined to work, or maybe I have missed some important points in this mechanism. If so, please point me to the problem. I would also consider other restoration approaches, for example, recreating all RPC objects manually and inserting them into the capability map/space. What are your thoughts on my approach? Can it work? Would another approach work better?
Kind regards, Denis
Hello Denis,
thanks to your help [1], I was able to implement my Checkpoint/Restore mechanism on Genode/Fiasco.OC. I also added the incremental checkpoint optimization to store only the memory regions that changed since the last checkpoint (although this is not working reliably due to a Fiasco.OC bug, which Stefan Kalkowski found for me [2]). I also managed to checkpoint the capability map, restore it with new badges, and insert the missing capabilities into the capability space of Fiasco.OC.
this is impressive!
My problem is that although I restore all RPC objects, in particular the instruction and stack pointer of the main thread, as well as the capability map and space, the target component simply starts its execution from the beginning of its Component::construct function.
This confuses me. If you restore the states of all threads, why don't the threads start at their checkpointed state? Starting the execution at the 'Component::construct' function feels wrong. In particular, code that is expected to be executed only once is in fact executed twice, once by the original component and a second time after waking up the restored component.
My approach: For the restore phase, I use Genode's native bootstrap mechanism (i.e., I create a Genode::Child object) until the child requests a LOG session from my Checkpoint/Restore component. I force a LOG session request in ::Constructor_component::construct() just before "Genode::call_component_construct(env);" in
https://github.com/genodelabs/genode/blob/16.08/repos/base/src/lib/base/entr...
Up to the session request, several RAM dataspaces are created, among other RPC objects, and attached to the address space. In my restore mechanism, I identify the RPC objects that were created by the bootstrap/startup mechanism and restore only their state.
What you observe here is the ELF loading of the child's binary. As part of the 'Child' object, the so-called '_process' member is constructed. You can find the corresponding code at 'base/src/lib/base/child_process.cc'. The code parses the ELF executable and loads the program segments, specifically the read-only text segment and the read-writable data/bss segment. For the latter, a RAM dataspace is allocated and filled with the content of the ELF binary's data. In your case, when resuming, this procedure is wrong. After all, you want to supply the checkpointed data to the new child, not the initial data provided by the ELF binary.
Fortunately, I encountered the same problem when implementing fork for noux. I solved it by letting the 'Child_process' constructor accept an invalid dataspace capability as ELF argument. This has two effects: First, the ELF loading is skipped (obviously - there is no ELF to load). And second, the creation of the initial thread is skipped as well.
In short, by supplying an invalid dataspace capability as binary for the new child, you avoid all those unwanted operations. The new child will not start at 'Component::construct'. You will have to manually create and start the threads of the new child via the PD and CPU session interfaces.
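In code, the trick is as simple as this (sketch only; the remaining 'Child' constructor arguments are left out):

    /*
     * A default-constructed dataspace capability is invalid. Passing it
     * as the ELF argument to the 'Child' constructor skips the ELF
     * loading and the creation of the initial thread (the remaining
     * constructor arguments are left out of this sketch).
     */
    Genode::Dataspace_capability binary_ds;  /* invalid capability */
    /* Genode::Child child(binary_ds, ...);  -- other arguments as usual */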
During that process, the mandatory CPU threads are identified (three of them: "ep", "signal_handler", and "childs_rom_name") and restored to their checkpointed state, in particular the ip and sp registers. I did that via Cpu_thread::state(Thread_state), but without luck. Although I know that the CPU threads were already started, I also tried calling Cpu_thread::start(ip, sp), but without success.
The approach looks good. I presume that you encounter base-foc-specific peculiarities of the thread-creation procedure. I would try to follow the code in 'base-foc/src/core/platform_thread.cc' to see what the interaction of core with the kernel looks like. The order of operations might be important.
One remaining problem may be that - even though you may be able to restore most of the thread state - the kernel-internal state cannot be captured. E.g., think of a thread that was blocking in the kernel via 'l4_ipc_reply_and_wait' when checkpointed. When resumed, the new thread can naturally not be in this blocking state because the kernel's state is not part of the checkpointed state. The new thread would possibly start its execution at the instruction pointer of the syscall and issue the system call again, but I am not sure what really happens in practice.
After the restoration, which happens entirely during the child's LOG session request, my component returns a valid session object to the child. Now the child should continue its work from the point where it was checkpointed, but it continues execution right after the LOG session request, ignoring the restored instruction pointer.
I think that you don't need the LOG-session quirk if you follow my suggestion to skip the ELF loading for the restored component altogether. Could you give it a try?
Cheers Norman
Hello Norman,
What you observe here is the ELF loading of the child's binary. As part of the 'Child' object, the so-called '_process' member is constructed. You can find the corresponding code at 'base/src/lib/base/child_process.cc'. The code parses the ELF executable and loads the program segments, specifically the read-only text segment and the read-writable data/bss segment. For the latter, a RAM dataspace is allocated and filled with the content of the ELF binary's data. In your case, when resuming, this procedure is wrong. After all, you want to supply the checkpointed data to the new child, not the initial data provided by the ELF binary.
Fortunately, I encountered the same problem when implementing fork for noux. I solved it by letting the 'Child_process' constructor accept an invalid dataspace capability as ELF argument. This has two effects: First, the ELF loading is skipped (obviously - there is no ELF to load). And second, the creation of the initial thread is skipped as well.
In short, by supplying an invalid dataspace capability as binary for the new child, you avoid all those unwanted operations. The new child will not start at 'Component::construct'. You will have to manually create and start the threads of the new child via the PD and CPU session interfaces.
Thank you for the hint. I will try out your approach.
The approach looks good. I presume that you encounter base-foc-specific peculiarities of the thread-creation procedure. I would try to follow the code in 'base-foc/src/core/platform_thread.cc' to see what the interaction of core with the kernel looks like. The order of operations might be important.
One remaining problem may be that - even though you may be able to restore most of the thread state - the kernel-internal state cannot be captured. E.g., think of a thread that was blocking in the kernel via 'l4_ipc_reply_and_wait' when checkpointed. When resumed, the new thread can naturally not be in this blocking state because the kernel's state is not part of the checkpointed state. The new thread would possibly start its execution at the instruction pointer of the syscall and issue the system call again, but I am not sure what really happens in practice.
Is there a way to avoid this situation? Can I postpone the checkpoint by letting the entrypoint thread finish the intercepted RPC function call and then increment the ip of the child's thread to the next instruction?
I think that you don't need the LOG-session quirk if you follow my suggestion to skip the ELF loading for the restored component altogether. Could you give it a try?
You are right, the LOG-session quirk seems a bit clumsy. I like your idea of skipping the ELF loading and the automated creation of CPU threads better, because it gives me control over creating and starting the threads from the stored ip and sp.
Best regards, Denis
Dear Genode community,
Preliminary: We implemented a Checkpoint/Restore mechanism on the basis of Genode/Fiasco.OC (thanks to the great help of you all). We store the state of the target component by monitoring its RPC function calls, which go through the parent component (= our Checkpoint/Restore component). The capability space is indirectly checkpointed through the capability map. The state of the target is restored by restoring the RPC objects used by the target component (e.g. PD session, dataspaces, region maps, etc.). The capabilities of the restored objects also have to be restored in the capability space (kernel) and in the capability map (userspace).
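For illustration, the monitoring follows this pattern (conceptual sketch with made-up names; the details differ in our implementation): the Child_policy of our Checkpoint/Restore component routes the child's session requests to local wrapper services that record the RPC traffic before forwarding it to core's real services.

    /* conceptual sketch, names made up (Genode 16.08 Child_policy API) */
    Genode::Service *resolve_session_request(char const *service_name,
                                             char const *args) override
    {
        /* route PD-session requests to our monitoring PD service */
        if (Genode::strcmp(service_name, "PD") == 0)
            return &_monitored_pd_service;  /* wraps core's PD service */

        return _find_service(service_name); /* default routing */
    }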
For restoring the target component, Norman suggested using the Genode::Child constructor with an invalid ROM dataspace capability, which does not trigger the bootstrap mechanism. Thus, we have full control over inserting the capabilities of the restored RPC objects into the capability space/map.
Our problem is the following: We restore the RPC objects and insert them into the capability map and then into the capability space. From the kernel's point of view, these capabilities are all "IPC gates". Unfortunately, there was also an IRQ kernel object created by the bootstrap mechanism. The following table shows the kernel debugger output of the capability space of the freshly bootstrapped target component:
000204 : 0016e* Gate   0015f* Gate   00158* Gate   00152* Gate
000208 : 00154* Gate   0017e* Gate   0017f* Gate   00179* Gate
00020c : 00180* Gate   00188* Gate   --            --
000210 : --            --            0018a* Gate   0018c* Gate
000214 : 0018e* Gate   00196* Gate   00145* Gate   00144* IRQ
000218 : 00198* Gate   --            --            --
00021c : --            0019c* Gate   --            --
At address 000217 you can see the IRQ kernel object. What does this object do, how can we store/monitor it, and how can it be restored? Where can we find the source code which creates this object in Genode's bootstrap code?
Best regards, Denis
Hello Denis,
On 03/27/2017 04:14 PM, Denis Huber wrote:
Our problem is the following: We restore the RPC objects and insert them into the capability map and then into the capability space. From the kernel's point of view, these capabilities are all "IPC gates". Unfortunately, there was also an IRQ kernel object created by the bootstrap mechanism. The following table shows the kernel debugger output of the capability space of the freshly bootstrapped target component:
000204 : 0016e* Gate   0015f* Gate   00158* Gate   00152* Gate
000208 : 00154* Gate   0017e* Gate   0017f* Gate   00179* Gate
00020c : 00180* Gate   00188* Gate   --            --
000210 : --            --            0018a* Gate   0018c* Gate
000214 : 0018e* Gate   00196* Gate   00145* Gate   00144* IRQ
000218 : 00198* Gate   --            --            --
00021c : --            0019c* Gate   --            --
At address 000217 you can see the IRQ kernel object. What does this object do, how can we store/monitor it, and how can it be restored? Where can we find the source code which creates this object in Genode's bootstrap code?
The IRQ kernel object you refer to is used by the "signal_handler" thread to block for signals from core's corresponding service. It is a base-foc-specific internal core RPC object [1] that is used by the signal handler [2]; the related capability is returned by the call to 'alloc_signal_source()' provided by the PD session [3].
I have to admit, I did not follow your current implementation approach in depth. Therefore, I do not know how exactly to handle this specific signal handler thread and its semaphore-like IRQ object, but maybe the references already help you further.
Regards Stefan
[1] repos/base-foc/src/core/signal_source_component.cc
[2] repos/base-foc/src/lib/base/signal_source_client.cc
[3] repos/base/src/core/include/pd_session_component.h
Hi everyone,
after Denis Huber left the project, I am in charge of making our checkpoint/restore component work. Therefore, I would like to ask some more questions about the IRQ kernel object.
1. When is the IRQ object created? Does every component have its own IRQ object?
I tried to figure out when the IRQ object is mapped into the object space of a component at its startup. To that end, I took a look at the code in [repos/base-foc/src/core/signal_source_component.cc]. The IRQ object appears in the object space after the "_sem = call<Rpc_request_semaphore>();" statement in the constructor.
As far as I could follow the implementation, the "request_semaphore" RPC call is answered by the "Signal_source_rpc_object" in [base-foc/src/include/signal_source/rpc_object.h], which returns/delegates the native capability "_blocking_semaphore", an attribute of the "Signal_source_rpc_object". It seems to me that the IRQ object already exists at this point and is only delegated to the component.
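If I read the code correctly, the server side boils down to this (abbreviated by me, see the actual file for details):

    /*
     * Abbreviated from base-foc/src/include/signal_source/rpc_object.h
     * (as far as I understand it): the RPC object merely hands out the
     * capability of an already existing IRQ object.
     */
    struct Signal_source_rpc_object /* : Rpc_object<...> */
    {
        Genode::Native_capability _blocking_semaphore; /* IRQ object cap */

        Genode::Native_capability request_semaphore()
        {
            return _blocking_semaphore; /* delegates, does not create */
        }
    };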
But when is the IRQ object created and by whom? Is it created when a new PD session is created?
2. Does the IRQ object carry any information? Do I need to checkpoint this information in order to be able to recreate the object properly during a restore process? Is the IRQ object created automatically (and I only have to make sure that the object gets mapped into the object space of the target) or do I have to create it manually?
In our current implementation of the restore process, we restore a component by recreating its sessions to core services (+ timer) with the help of information we gathered using a custom runtime environment. After the sessions are restored, we place them in the object space at the correct position. Will I also have to store information about the IRQ object somehow? Or is it just some object that needs to exist?
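Just to make clear what I mean by "place them in the object space": we transfer each capability to a fixed selector in the target task, roughly like this (sketch; the capability selectors are made up, and I may be glossing over details of the map item encoding):

    /*
     * Sketch of placing a capability at a fixed position in the target's
     * object space on Fiasco.OC: map capability selector 'local_sel' of
     * our own task to selector 'dest_sel' in 'child_task'.
     */
    l4_task_map(child_task, L4_BASE_TASK_CAP,
                l4_obj_fpage(local_sel, 0, L4_FPAGE_RWX),
                dest_sel | L4_ITEM_MAP);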
Kind Regards, David
Hi everyone,
As I'm stuck on this problem, I would appreciate any kind of advice.
Best Regards, David
Hi David,
On 06/21/2017 12:12 PM, David Werner wrote:
Hi everyone,
As I'm stuck on this problem, I would appreciate any kind of advice.
sorry for the long delay. See below for my comments.
Am 07.06.2017 um 15:13 schrieb David Werner:
Hi everyone,
after Denis Huber left the project, I am in charge of making our checkpoint/restore component work. Therefore, I would like to ask some more questions about the IRQ kernel object.
1. When is the IRQ object created? Does every component have its own IRQ object?
It is created when the signal source client (a separate thread) is created. The signal source client is created once while bootstrapping a component. It receives pure signals from the corresponding core service and delivers them locally, e.g., it unblocks an entrypoint that is waiting for signals and IPC.
I tried to figure out when the IRQ object is mapped into the object space of a component at its startup. To that end, I took a look at the code in [repos/base-foc/src/core/signal_source_component.cc]. The IRQ object appears in the object space after the "_sem = call<Rpc_request_semaphore>();" statement in the constructor.
As far as I could follow the implementation, the "request_semaphore" RPC call is answered by the "Signal_source_rpc_object" in [base-foc/src/include/signal_source/rpc_object.h], which returns/delegates the native capability "_blocking_semaphore", an attribute of the "Signal_source_rpc_object". It seems to me that the IRQ object already exists at this point and is only delegated to the component.
But when is the IRQ object created and by whom? Is it created when a new PD session is created?
It is created by core when a new SIGNAL session is opened. This is typically done during the startup of a new component. You are right: the request_semaphore() call then just transfers the IRQ object's capability from core to the requesting component.
2. Does the IRQ object carry any information? Do I need to checkpoint this information in order to be able to recreate the object properly during a restore process? Is the IRQ object created automatically (and I only have to make sure that the object gets mapped into the object space of the target) or do I have to create it manually?
The IRQ object does not carry information, but its state changes when a thread attaches to or detaches from it. So if you re-create that specific IRQ object, the signal handler thread that is using the signal source client has to attach to the replaced IRQ object again.
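On Fiasco.OC, this attachment corresponds to the 'l4_irq_attach' operation, roughly (the capability selectors are placeholders):

    /*
     * Rough sketch of the re-attachment (capability selectors are
     * placeholders): the signal-handler thread attaches to the
     * re-created IRQ object and blocks on it again.
     */
    l4_irq_attach(irq_kcap, /* label */ 0, signal_handler_kcap);
    l4_irq_receive(irq_kcap, L4_IPC_NEVER); /* block until next signal */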
In our current implementation of the restore process, we restore a component by recreating its sessions to core services (+ timer) with the help of information we gathered using a custom runtime environment. After the sessions are restored, we place them in the object space at the correct position. Will I also have to store information about the IRQ object somehow? Or is it just some object that needs to exist?
As said, this specific IRQ object is part of the SIGNAL session and its client state. I'm not sure how exactly your restore mechanism works, but if work is done within the component that gets restored, you can do the re-attachment there. Otherwise, you would need to change the request_semaphore call so that the information about which thread gets attached becomes part of the server side in core: instead of attaching to the IRQ object itself, the signal handler thread transfers its identity to core via request_semaphore. Core attaches the thread and delivers the capability. Whenever request_semaphore is called, you detach formerly attached threads, as well as when the session is closed. Does that make sense to you?
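In pseudo code, the changed server side in core could look like this (hypothetical sketch; '_attach' and '_detach' stand for the kernel operations, '_attached' for the remembered thread):

    /*
     * Hypothetical sketch of the changed server side in core: the
     * client passes its signal-handler thread along with the
     * request_semaphore call, core performs the attachment, and any
     * formerly attached thread gets detached first.
     */
    Genode::Native_capability
    request_semaphore(Genode::Thread_capability handler)
    {
        if (_attached.valid())
            _detach(_attached);                 /* drop former attachment */

        _attach(handler, _blocking_semaphore);  /* core-side irq attach   */
        _attached = handler;

        return _blocking_semaphore;             /* deliver the IRQ cap    */
    }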
Regards Stefan
Hello,
On 22.06.2017 16:08, Stefan Kalkowski wrote:
> It is created by core when a new SIGNAL session is opened.
Actually, the SIGNAL session interface does not exist anymore. Its functionality moved into core's PD service in Genode 16.05.
@David, to understand the signal-delivery mechanism via core, the following parts of the implementation are most interesting.
Implementation of core's PD service:
repos/base/src/core/include/pd_session_component.h
repos/base/src/core/pd_session_component.cc
The part of the PD-session interface responsible for asynchronous notifications:
repos/base/src/core/include/signal_broker.h
repos/base/src/core/include/signal_source_component.h
Fiasco.OC-specific back end (interaction with Fiasco.OC's virtual IRQs):
repos/base-foc/src/core/signal_source_component.cc
Client-side counterpart (present in each component):
repos/base-foc/src/lib/base/signal_source_client.cc
Cheers Norman
Hi Stefan,
thank you for your answer!
Am 22.06.2017 um 16:08 schrieb Stefan Kalkowski:
sorry for the long delay. See below for my comments.
No problem. I'm thankful you provided that information.
The IRQ object does not carry information, but its state changes when a thread attaches to or detaches from it. So if you re-create that specific IRQ object, the signal handler thread that is using the signal source client has to attach to the replaced IRQ object again.
This seems to be very convenient for a restore procedure.
As said, this specific IRQ object is part of the SIGNAL session and its client state. I'm not sure how exactly your restore mechanism works, but if work is done within the component that gets restored, you can do the re-attachment there. Otherwise, you would need to change the request_semaphore call so that the information about which thread gets attached becomes part of the server side in core: instead of attaching to the IRQ object itself, the signal handler thread transfers its identity to core via request_semaphore. Core attaches the thread and delivers the capability. Whenever request_semaphore is called, you detach formerly attached threads, as well as when the session is closed. Does that make sense to you?
I think I understand what you are describing here, and I will try to modify the request_semaphore call in that way.
Again, thank you!
Kind Regards, David
Kind Regards, David
Am 29.03.2017 um 14:05 schrieb Stefan Kalkowski:
Hello Dennis,
On 03/27/2017 04:14 PM, Denis Huber wrote:
Dear Genode community,
Preliminary: We implemented a Checkpoint/Restore mechanism on basis of Genode/Fiasco.OC (Thanks to the great help of you all). We store the state of the target component by monitoring its RPC function calls which go through the parent component (= our Checkpoint/Restore component). The capability space is indirectly checkpointed through the capability map. The restoring of the state of the target is done by restoring the RPC objects used by the target component (e.g. PD session, dataspaces, region maps, etc.). The capabilities of the restored objects have to be also restored in the capability space (kernel) and in the capability map (userspace).
For restoring the target component Norman suggested the usage of the Genode::Child constructor with an invalid ROM dataspace capability which does not trigger the bootstrap mechanism. Thus, we have the full control of inserting the capabilities of the restored RPC objects into the capability space/map.
Our problem is the following: We restore the RPC objects and insert them into the capability map and then in the capability space. From the kernel point of view these capabilities are all "IPC Gates". Unfortunately, there was also an IRQ kernel object created by the bootstrap mechanism. The following table shows the kernel debugger output of the capability space of the freshly bootstraped target component:
000204 :0016e* Gate 0015f* Gate 00158* Gate 00152* Gate 000208 :00154* Gate 0017e* Gate 0017f* Gate 00179* Gate 00020c :00180* Gate 00188* Gate -- -- 000210 : -- -- 0018a* Gate 0018c* Gate 000214 :0018e* Gate 00196* Gate 00145* Gate 00144* IRQ 000218 :00198* Gate -- -- -- 00021c : -- 0019c* Gate -- --
At address 000217 you can see the IRQ kernel object. What does this object do, how can we store/monitor it, and how can it be restored? Where can we find the source code which creates this object in Genode's bootstrap code?
The IRQ kernel object you refer to is used by the "signal_handler" thread to block for signals of core's corresponding service. It is a base-foc specific internal core RPC object[1] that is used by the signal handler[2] and the related capability gets returned by the call to 'alloc_signal_source()' provided by the PD session[3].
I have to admit, I did not follow your current implementation approach in depth. Therefore, I do not know how exactly to handle this specific signal handler thread and its semaphore-like IRQ object, but maybe the references already help you further.
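For orientation, the path by which a component ends up attached to this IRQ object can be sketched as follows (a simplified sketch; 'pd' is a hypothetical PD session reference, and the referenced files below contain the real implementation):

  /* sketch: client-side acquisition of the signal source */
  Genode::Capability<Genode::Signal_source> source_cap =
      pd.alloc_signal_source();            /* PD session interface, see [3] */

  /* the Signal_source_client constructor [2] obtains the semaphore-like
   * IRQ capability [1] and attaches the calling thread to it */
  Genode::Signal_source_client source(source_cap);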
Regards Stefan
[1] repos/base-foc/src/core/signal_source_component.cc
[2] repos/base-foc/src/lib/base/signal_source_client.cc
[3] repos/base/src/core/include/pd_session_component.h
Best regards, Denis
On 11.12.2016 13:01, Denis Huber wrote:
Hello Norman,
> What you observe here is the ELF loading of the child's binary. As part
> of the 'Child' object, the so-called '_process' member is constructed.
> You can find the corresponding code at 'base/src/lib/base/child_process.cc'.
> The code parses the ELF executable and loads the program segments,
> specifically the read-only text segment and the read-writable data/bss
> segment. For the latter, a RAM dataspace is allocated and filled with the
> content of the ELF binary's data. In your case, when resuming, this
> procedure is wrong. After all, you want to supply the checkpointed data
> to the new child, not the initial data provided by the ELF binary.
>
> Fortunately, I encountered the same problem when implementing fork for
> noux. I solved it by letting the 'Child_process' constructor accept an
> invalid dataspace capability as ELF argument. This has two effects:
> First, the ELF loading is skipped (obviously - there is no ELF to load).
> And second, the creation of the initial thread is skipped as well.
>
> In short, by supplying an invalid dataspace capability as binary for the
> new child, you avoid all those unwanted operations. The new child will
> not start at 'Component::construct'. You will have to manually create
> and start the threads of the new child via the PD and CPU session
> interfaces.

Thank you for the hint. I will try out your approach.
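In code, the key point reads like this minimal sketch (the surrounding 'Child_process'/'Child' constructor arguments are elided, since they depend on the Genode version in use):

  /* a default-constructed dataspace capability is invalid */
  Genode::Dataspace_capability binary_ds;

  if (!binary_ds.valid()) {
      /* this is the case the 'Child_process' constructor detects: it then
       * skips the ELF loading and does not create the initial thread */
  }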
> The approach looks good. I presume that you encounter base-foc-specific
> peculiarities of the thread-creation procedure. I would try to follow
> the code in 'base-foc/src/core/platform_thread.cc' to see what the
> interaction of core with the kernel looks like. The order of operations
> might be important.
>
> One remaining problem may be that - even though you may be able to
> restore most parts of the thread state - the kernel-internal state cannot
> be captured. E.g., think of a thread that was blocking in the kernel via
> 'l4_ipc_reply_and_wait' when checkpointed. When resumed, the new thread
> can naturally not be in this blocking state because the kernel's state
> is not part of the checkpointed state. The new thread would possibly
> start its execution at the instruction pointer of the syscall and issue
> the system call again, but I am not sure what really happens in practice.

Is there a way to avoid this situation? Can I postpone the checkpoint by letting the entrypoint thread finish the intercepted RPC function call, then increment the ip of the child's thread to the next instruction?
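For reference, writing the checkpointed registers back through the generic CPU thread interface could look like the following minimal sketch (assuming a Genode::Cpu_thread_client for the checkpointed thread and saved values checkpointed_ip/checkpointed_sp; the kernel-internal state discussed above remains uncovered):

  /* sketch: restore the checkpointed registers of one thread */
  Genode::Cpu_thread_client cpu_thread(thread_cap); /* cap from the CPU session */

  Genode::Thread_state state = cpu_thread.state();  /* read current state */
  state.ip = checkpointed_ip;                       /* saved instruction pointer */
  state.sp = checkpointed_sp;                       /* saved stack pointer */
  cpu_thread.state(state);                          /* write the state back */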
> I think that you don't need the LOG-session quirk if you follow my
> suggestion to skip the ELF loading for the restored component
> altogether. Could you give it a try?

You are right, the LOG-session quirk seems a bit clumsy. I like your idea of skipping the ELF loading and the automated creation of CPU threads more, because it gives me the control to create and start the threads from the stored ip and sp.
Best regards, Denis
Hi Stefan,
At the moment the l4_irq_attach(...) call takes place in the constructor of Signal_source_client. [/repos/base-foc/src/lib/base/signal_source_client.cc].
The call takes two Fiasco::l4_cap_idx_t values (kcaps): one determines the IRQ object, the other the thread that should be attached to it.
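For context, the original client-side attachment can be sketched like this (paraphrased from the constructor in signal_source_client.cc; details may differ between versions):

  /* original scheme: core hands out the semaphore/IRQ capability ... */
  _sem = call<Rpc_request_semaphore>();

  /* ... and the client attaches its own gate kcap to it */
  Fiasco::l4_irq_attach(_sem.data()->kcap(), 0,
                        Thread::myself()->native_thread().kcap);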
When moving the l4_irq_attach call to the server side, these kcaps are obviously different from the ones on the client side. For the IRQ, I simply use the _blocking_semaphore native capability to find out the kcap, but I failed to find a way to determine the kcap of the thread on the server side.
Therefore, my question is whether you could go into a bit more detail about how you would realize the "transfer its identity to core" part. Do I have to transfer the thread name, its capability, or something else?
I tried a few things, but I couldn't make it work.
Kind Regards, David
Hi David,
What I meant with:
"... the signal handler thread transfers its identity to core via request_semaphore..."
was that you add an additional argument to the request_semaphore call, which is the CPU session's thread capability of the calling thread, like this:
Thread::myself()->cap()
Core can thereby retrieve the "real" thread capability, in terms of the kernel's thread capability, and attach that to the IRQ object. I hope I could get my ideas across to you.
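In interface terms, this amounts to one added RPC argument, roughly as follows (a sketch following the GENODE_RPC declaration style; the exact interface lives in base-foc's signal-source headers):

  /* before: no argument */
  GENODE_RPC(Rpc_request_semaphore, Native_capability, _request_semaphore);

  /* after: the caller passes its thread capability along */
  GENODE_RPC(Rpc_request_semaphore, Native_capability, _request_semaphore,
             Thread_capability);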
Best regards Stefan
Hi Stefan,
Thank you! I modified the request_semaphore call according to your suggestion.
My only problem is that I don't get how to use the transferred thread capability to retrieve the kernel's thread capability. More precisely, I'm not able to figure out how to determine the correct kcap which I have to use in the l4_irq_attach call (which is now on core's side).
Could you give me some more advice on that?
Kind Regards, David
Hi,
it seems to work if I use <capability>.data()->kcap(). Is that correct?
Kind Regards, David
On 07/05/2017 02:00 PM, David Werner wrote:
Hi,
it seems to work if I use <capability>.data()->kcap(). Is that correct?
Yes.
Regards Stefan
Hi Stefan,
I modified the request_semaphore call and changed the constructor of the Signal_source_client as follows:
[/repos/base-foc/src/lib/base/signal_source_client.cc] :
Signal_source_client::Signal_source_client(Capability<Signal_source> cap)
:
    Rpc_client<Foc_signal_source>(static_cap_cast<Foc_signal_source>(cap))
{
    /* request mapping of semaphore capability selector */
    _sem = call<Rpc_request_semaphore>(Thread::myself()->cap());
}
[/repos/base-foc/src/include/signal_source/rpc_object.h] :
Native_capability _request_semaphore(Thread_capability tcap)
{
    /* detach the formerly attached thread, if any */
    Fiasco::l4_irq_detach(_blocking_semaphore.data()->kcap());

    /* attach the calling thread to the IRQ object */
    Fiasco::l4_msgtag_t tag =
        Fiasco::l4_irq_attach(_blocking_semaphore.data()->kcap(), 0,
                              tcap.data()->cap());
    if (l4_error(tag))
        Genode::raw("l4_irq_attach failed with ", l4_error(tag));

    return _blocking_semaphore;
}
Unfortunately, I run into problems when a component uses a timer: as soon as timer.sleep is called, the component blocks forever.
After taking a look at some code (especially Platform_thread), I think the problem might be that I now use the kcap of the thread capability for the l4_irq_attach call instead of the kcap of the corresponding gate capability.
Is that right, or might there be another reason for my timer problem?
Kind Regards,
David
Hi,
let me rephrase my question: does "Thread::myself()->native_thread().kcap" return the kcap of a different capability than "Thread::myself()->cap().data()->kcap()"? As far as I understand, yes: the former returns the kcap of the gate capability, while the latter returns the kcap of the thread object capability.
If that is true, is there a possibility to access the gate capability from within the "Signal_source_rpc_object" [/repos/base-foc/src/include/signal_source/rpc_object.h]?
Kind Regards,
David
Hi David,
sorry for my late response!
In fact, Thread::myself()->cap() delivers the capability to the thread's object representation in core, not to be mixed up with the kernel capability that refers to the actual thread! Therefore, I advised sending this capability to core and retrieving the "real" thread capability out of the CPU session's object that belongs to the thread. You can use core's entrypoint, which is the same for Signal, Irq, Cpu, etc., to retrieve the right Cpu_thread_component from the capability delivered by the signal source client, similar to this snippet:
entrypoint.apply(delivered_thread_cap, [&] (Cpu_thread_component *t) {
    if (t) ... /* attach t->platform_thread().thread().local */
});
You can store the entrypoint within the Genode::Signal_source_rpc_object by extending its constructor. Cpu_thread_component contains the Platform_thread object of the thread, including its "real" kernel capability.
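A sketch of such a constructor extension (base class and member names are assumptions, not the literal code):

  struct Signal_source_rpc_object : Rpc_object<Foc_signal_source,
                                               Signal_source_rpc_object>
  {
      Rpc_entrypoint    &_ep;                 /* core's entrypoint */
      Native_capability  _blocking_semaphore; /* the IRQ object */

      Signal_source_rpc_object(Rpc_entrypoint &ep, Native_capability sem)
      : _ep(ep), _blocking_semaphore(sem) { }
  };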
A shortcut might be to propagate the gate capability to core instead, simply by using native_thread(), which you already mentioned. In that case, you can simply attach the received capability to the IRQ object. But this is just a proof-of-concept implementation and inherently insecure, as it opens up the possibility of attaching signal sources to arbitrary threads, including threads of other protection domains.
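Put together, the secure server-side variant could look roughly like this (a sketch combining the snippet above with the kcap expression that appears later in this thread; member names follow the constructor sketch):

  Native_capability _request_semaphore(Thread_capability tcap)
  {
      /* detach whatever thread was attached before */
      Fiasco::l4_irq_detach(_blocking_semaphore.data()->kcap());

      /* resolve the kernel thread capability in core and attach it */
      _ep.apply(tcap, [&] (Cpu_thread_component *t) {
          if (t)
              Fiasco::l4_irq_attach(_blocking_semaphore.data()->kcap(), 0,
                  t->platform_thread().thread().local.data()->kcap());
      });

      return _blocking_semaphore;
  }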
Regards Stefan
Hi Stefan,
thank you for your response. It helped me a lot!
I extended the constructor of Genode::Signal_source_rpc_object without any problems.
When I try to access the Cpu_thread_component pointer t within the lambda function that is handed to "entrypoint.apply(...)", I get the following compiler errors:
  COMPILE cpu_session_component.o
In file included from /home/david/genode-kia4sm/repos/base/src/core/include/signal_source_component.h:17:0,
                 from /home/david/genode-kia4sm/repos/base/src/core/include/signal_broker.h:17,
                 from /home/david/genode-kia4sm/repos/base/src/core/include/pd_session_component.h:30,
                 from /home/david/genode-kia4sm/repos/base/src/core/include/cpu_thread_component.h:26,
                 from /home/david/genode-kia4sm/repos/base/src/core/include/cpu_session_component.h:27,
                 from /home/david/genode-kia4sm/repos/base/src/core/cpu_session_component.cc:21:
/home/david/genode-kia4sm/repos/base-focnados/src/include/signal_source/rpc_object.h: In lambda function:
/home/david/genode-kia4sm/repos/base-focnados/src/include/signal_source/rpc_object.h:54:14: error: invalid use of incomplete type 'class Genode::Cpu_thread_component'
     return t->platform_thread().thread().local.data()->kcap();
              ^
In file included from /home/david/genode-kia4sm/repos/base/src/core/include/cpu_thread_component.h:25:0,
                 from /home/david/genode-kia4sm/repos/base/src/core/include/cpu_session_component.h:27,
                 from /home/david/genode-kia4sm/repos/base/src/core/cpu_session_component.cc:21:
/home/david/genode-kia4sm/repos/base/src/core/include/cpu_thread_allocator.h:26:8: error: forward declaration of 'class Genode::Cpu_thread_component'
 class Cpu_thread_component;
        ^
/home/david/genode-kia4sm/repos/base/mk/generic.mk:56: recipe for target 'cpu_session_component.o' failed
I have no idea what this is trying to tell me. Do you have an idea what the problem here is?
Kind regards, David
Hi David,
On 11/15/2017 08:56 AM, David Werner wrote:
  COMPILE cpu_session_component.o
In file included from /home/david/genode-kia4sm/repos/base/src/core/include/signal_source_component.h:17:0,
                 from /home/david/genode-kia4sm/repos/base/src/core/include/signal_broker.h:17,
                 from /home/david/genode-kia4sm/repos/base/src/core/include/pd_session_component.h:30,
                 from /home/david/genode-kia4sm/repos/base/src/core/include/cpu_thread_component.h:26,
                 from /home/david/genode-kia4sm/repos/base/src/core/include/cpu_session_component.h:27,
                 from /home/david/genode-kia4sm/repos/base/src/core/cpu_session_component.cc:21:
/home/david/genode-kia4sm/repos/base-focnados/src/include/signal_source/rpc_object.h: In lambda function:
/home/david/genode-kia4sm/repos/base-focnados/src/include/signal_source/rpc_object.h:54:14: error: invalid use of incomplete type 'class Genode::Cpu_thread_component'
     return t->platform_thread().thread().local.data()->kcap();
              ^
this is the most interesting information here. The compiler tells you that it does not know enough about the class Genode::Cpu_thread_component at this point. See some lines further down.
In file included from /home/david/genode-kia4sm/repos/base/src/core/include/cpu_thread_component.h:25:0,
                 from /home/david/genode-kia4sm/repos/base/src/core/include/cpu_session_component.h:27,
                 from /home/david/genode-kia4sm/repos/base/src/core/cpu_session_component.cc:21:
/home/david/genode-kia4sm/repos/base/src/core/include/cpu_thread_allocator.h:26:8: error: forward declaration of 'class Genode::Cpu_thread_component'
 class Cpu_thread_component;
As you can see here, the compiler only has information via this forward declaration of the class Cpu_thread_component. The full definition is missing, because the header containing Cpu_thread_component could not be included here - for reasons.
A simple solution would be to remove the implementation of the constructor and just declare it within the header file. Move the implementation of the constructor to some appropriate compilation unit (*.cc) within core, and include the header containing Cpu_thread_component within it.
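Sketched, the declare-in-header, define-in-.cc split looks like this; Stefan words it for the constructor, but the same pattern applies to the method the compiler complains about (file placement is an assumption):

  /* rpc_object.h: declaration only - the forward declaration of
   * Cpu_thread_component suffices here */
  Native_capability _request_semaphore(Thread_capability tcap);

  /* some core compilation unit, e.g. signal_source_component.cc */
  #include <cpu_thread_component.h>   /* full class definition visible */

  Native_capability
  Signal_source_rpc_object::_request_semaphore(Thread_capability tcap)
  {
      /* entrypoint lookup and l4_irq_attach as before */
  }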
Regards Stefan