Hi Everyone,
We're porting the Genode timer to the RISC-V architecture. We have created a working prototype, but we have some concerns regarding the integration with Genode and whether all timer features will work correctly.
Looking at the examples from other architectures, the common way to port the timer is through a hardware timer controller that is accessible via memory-mapped I/O (MMIO).
The RISC-V core does not provide such a timer controller in its default configuration. (RISC-V offers a register-based approach instead, but those registers are not accessible to the timer process, since it does not run at the required CPU privilege level.)
To get the timer working on RISC-V, we created a software timer controller that works via MMIO. The actual time operations are performed by the scheduler, which does have the privileges to access the RISC-V core's timer registers.
This is the approach we took:
- Declare a region of normal memory as I/O memory (our virtual/software timer controller), through a modification of base-hw/src/core/spec/riscv/platform_support.cc.
- Give the timer process MEM_IO and IRQ access to this memory region, so that it can 1) read the time from a specific address, and 2) set the alarm time and sleep until it is woken up by a signal, by letting the timer process wait for an unused IRQ with number IRQ#.
- Modify the scheduling function, which executes with supervisor privileges, to perform the following tasks whenever the job scheduler is called (through modifications in base-hw/src/core/kernel/cpu.cc): 1) read the time from the hardware register and write it to the I/O memory region, 2) read the alarm value from another address in the I/O memory region and, if it is set, compare it to the current time; if the alarm time has expired, signal User_irq with number IRQ#.
- The timer wakes up and continues to do whatever timers do.
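To make the scheduler-side part of these steps concrete, here is a minimal sketch in plain C++. The struct layout, field names, and function name are made up for illustration; they are not taken from the actual patch, and the real code would of course write to the declared I/O memory region instead of a local struct:

```cpp
#include <cstdint>

/* Hypothetical layout of the software time-controller region */
struct Soft_timer_regs
{
    uint64_t time;        /* current time, published by the scheduler */
    uint64_t alarm;       /* alarm deadline set by the timer process  */
    bool     alarm_set;   /* whether an alarm is currently installed  */
    bool     irq_pending; /* stands in for signalling User_irq IRQ#   */
};

/* Work the kernel scheduler would do on each invocation:
 * publish the hardware time and fire the alarm if it expired */
void scheduler_tick(Soft_timer_regs &regs, uint64_t hw_time)
{
    regs.time = hw_time;                        /* 1) publish time  */
    if (regs.alarm_set && hw_time >= regs.alarm) { /* 2) check alarm */
        regs.irq_pending = true;                /* signal User_irq  */
        regs.alarm_set   = false;
    }
}
```

The timer process on the other side would read 'time' from its mapped I/O memory and write 'alarm' before blocking on the IRQ.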
This approach works in the Spike simulator: we can run the run/hello target, where the process blocks for a while and is then woken up by the timer.
Our concerns are the following:
- Will this support all the features a timer needs? Did we take the right approach here?
- Can we be sure that the memory we allocated for our software timer controller won't be allocated for other uses later?
- We've made some RISC-V-specific changes in non-RISC-V source code. Is the scheduler the best place to make these changes? Is there a possibility to move these changes to a RISC-V-specific directory and call them through a callback, so that the code still works for other platforms?
Regards, Menno
Hi Menno,
On 02/18/2016 01:17 PM, Menno Valkema wrote:
We are thinking about a kernel timer mechanism as well. In my opinion, the way to go would be to implement something similar to the Nova approach (have a look at 'repos/os/src/drivers/timer/spec/nova'), where you have a generic semaphore and can perform a down operation with a given timeout. It does not have to be a semaphore, but could also be a signal with a timeout. This way all base-hw platforms could take advantage of this feature. I think the scheduler might be the right place to implement this (in generic code).
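To illustrate the kind of interface meant here, the following is a rough sketch of a semaphore whose down operation can time out. It is built on std::condition_variable purely for illustration; the class name and shape are assumptions, not the Nova or base-hw API:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

/* Illustrative semaphore with a timed down operation */
class Timeout_semaphore
{
    std::mutex              _mutex;
    std::condition_variable _cond;
    unsigned                _count = 0;

public:
    void up()
    {
        std::lock_guard<std::mutex> guard(_mutex);
        ++_count;
        _cond.notify_one();
    }

    /* returns true if the semaphore was taken, false on timeout */
    bool down(std::chrono::milliseconds timeout)
    {
        std::unique_lock<std::mutex> lock(_mutex);
        if (!_cond.wait_for(lock, timeout, [&]{ return _count > 0; }))
            return false;
        --_count;
        return true;
    }
};
```

A timer client would then block in down() with its desired timeout instead of waiting for an IRQ directly.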
@Martin: Any thoughts about this?
Regards,
Sebastian
Hi Menno, Sebastian,
On 02/18/2016 01:17 PM, Menno Valkema wrote:
- Will this support all the features a timer needs? Did we take the
right approach here?
I think yes. AFAIK, there must be a way to install a timeout that provides asynchronous feedback (it's not necessary to use IRQs for that), and it must be possible to sample the timer value together with the corresponding overrun status. The overrun status indicates whether the last installed timeout had already expired at the time of the sample. The overrun status is reset when a new timeout gets installed.
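These semantics can be captured in a small toy model; everything below (class and member names included) is illustrative only, not Genode code:

```cpp
#include <cstdint>

/* Toy model of the timeout/overrun semantics described above */
class Timer_model
{
    uint64_t _now      = 0;
    uint64_t _deadline = 0;
    bool     _armed    = false;

public:
    void advance(uint64_t ticks) { _now += ticks; }

    /* installing a new timeout resets the overrun status */
    void install_timeout(uint64_t duration)
    {
        _deadline = _now + duration;
        _armed    = true;
    }

    struct Sample { uint64_t value; bool overrun; };

    /* sample the timer value plus the corresponding overrun status */
    Sample sample() const
    {
        return { _now, _armed && _now >= _deadline };
    }
};
```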
- Can we be sure the memory we now allocated for our software time
controller, won't be allocated for other uses later?
Yes. If you provide your IO memory through the IOMEM service and try to open a second session to it, the server complains like ...
I/O memory [x,y) not available
Local MMIO mapping failed!
... and sends a Genode::Parent::Service_denied exception to the client.
- We've made some riscv-specific changes in non risc-v source code. Is
the scheduler the best place to make these changes? Is there a possibility to move these changes to a riscv specific directory and call these with a callback, so that the code still works for other platforms.
As Sebastian said, we're playing with the idea of moving to a kernel-driven timer in general, or at least in base-hw. But it is not yet clear whether this idea actually brings the benefits we're hoping for, and at the moment no one is working on it. Thus, I would suggest moving the RISC-V-specific implementation into separate files for now. You can do this by declaring the hooks in the generic header [1] and implementing them in a new RISC-V-specific unit [2]. You would then have to add the unit to 'SRC_CC' in [3].
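The hook pattern suggested here could look roughly like this; all names below are made up for illustration (the counter merely stands in for the platform-specific work), and the real declarations would go into the generic header with the implementation in the RISC-V-specific unit:

```cpp
namespace Kernel {

    /* the generic header would declare the hook ... */
    void timer_hook();

    /* ... and the generic scheduler would call it unconditionally */
    void schedule()
    {
        timer_hook();
        /* ... pick the next job as before ... */
    }
}

/* counter standing in for the RISC-V-specific work, for illustration */
unsigned hook_invocations = 0;

/* the RISC-V-specific unit provides the implementation; other
 * platforms would provide an empty stub */
void Kernel::timer_hook()
{
    /* here: publish hardware time, check alarm, signal User_irq */
    ++hook_invocations;
}
```

The per-platform build description (core.mk) then selects which implementation unit gets compiled in, so no RISC-V code remains in the generic path.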
Cheers, Martin
[1] base-hw/src/core/include/kernel/cpu_scheduler.h
[2] base-hw/src/core/spec/riscv/kernel/cpu_scheduler.cc
[3] base-hw/lib/mk/spec/riscv/core.mk
Hi Martin, Sebastian,
Thank you for thinking along. Your replies give us confidence in our current implementation, so we can move forward with it.
A general kernel timer interface would probably help for this specific CPU architecture in the future, but as it does not exist at present, we will stick with our current approach for now.
Thanks, Menno
_______________________________________________
genode-main mailing list
genode-main@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/genode-main