Hi! I have 5 GPIO keys. They belong to module GPIO2 and share the same IRQ line MA_IRQ_30. How can I implement the GPIO key driver? I use the code below to initialize the GPIO pins and their IRQs:

  void _init_gpio(int gpio)
  {
      // Configure GPIO
      _gpio.direction_input(gpio);
      _gpio.debouncing_time(gpio, 31*100);
      _gpio.debounce_enable(gpio, 1);
      _gpio.falling_detect(gpio, 1);
      _gpio.irq_enable(gpio, 1);
  }

The "int gpio" argument will be 32, 33, 36, 37, or 38. When I use

  _gpio.irq_sigh(_sig_rec.manage(&_sig_ctx), gpio);

a question arises: should I call it only once, or five times? If I call it only once, how can I determine which GPIO pin triggered the IRQ? Maybe by using the function _gpio.datain(int gpio) to detect it?

Another question: my touchscreen and the 5 GPIO keys share the same IRQ line MA_IRQ_30. How can I deal with the two kinds of drivers? In os/drivers/input/, should I make two directories named ft5406 and gpio_keys? When the IRQ comes, which handle_event() will execute?
Thanks very much!
Hi,
On 05/11/2013 01:42 PM, gaober wrote:
> Hi! I have 5 GPIO keys. They belong to module GPIO2 and share the same IRQ
> line MA_IRQ_30. How can I implement the GPIO key driver? I use the code
> below to initialize the GPIO pins and their IRQs:
>
>   void _init_gpio(int gpio)
>   {
>       // Configure GPIO
>       _gpio.direction_input(gpio);
>       _gpio.debouncing_time(gpio, 31*100);
>       _gpio.debounce_enable(gpio, 1);
>       _gpio.falling_detect(gpio, 1);
>       _gpio.irq_enable(gpio, 1);
>   }
>
> The "int gpio" argument will be 32, 33, 36, 37, or 38. When I use
> _gpio.irq_sigh(_sig_rec.manage(&_sig_ctx), gpio), a question arises: should
> I call it only once, or five times? If I call it only once, how can I
> determine which GPIO pin triggered the IRQ? Maybe by using the function
> _gpio.datain(int gpio) to detect it?
If every GPIO key uses a different GPIO pin, you will of course need to register a signal handler, i.e., a "Signal_context_capability", for every single pin. Whether one pin or multiple pins are used depends on your device.
The "datain(int gpio)" function, just tells you what input level the appropriate GPIO pin has (low or high).
To distinguish different signals (e.g., GPIO interrupts) arriving at one and the same signal receiver, you have to use a dedicated Signal_context object for each signal source. For each signal received, the corresponding "Signal_context" is delivered along with it. You can compare the received context against each context you registered beforehand to find out which pin triggered the interrupt.
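Just to illustrate the idea, a minimal sketch could look like this (it builds on your _init_gpio() and irq_sigh() calls; NUM_KEYS, _key_gpio[], and _handle_key() are only placeholders for illustration):

  #include <base/signal.h>

  enum { NUM_KEYS = 5 };

  static int const _key_gpio[NUM_KEYS] = { 32, 33, 36, 37, 38 };

  Genode::Signal_receiver _sig_rec;
  Genode::Signal_context  _sig_ctx[NUM_KEYS];  /* one context per pin */

  void _init_irqs()
  {
      /* register a distinct signal context for every GPIO key */
      for (int i = 0; i < NUM_KEYS; i++) {
          _init_gpio(_key_gpio[i]);
          _gpio.irq_sigh(_sig_rec.manage(&_sig_ctx[i]), _key_gpio[i]);
      }
  }

  void _event_loop()
  {
      for (;;) {
          /* block until one of the registered signals arrives */
          Genode::Signal sig = _sig_rec.wait_for_signal();

          /* compare the delivered context with the registered ones */
          for (int i = 0; i < NUM_KEYS; i++)
              if (sig.context() == &_sig_ctx[i])
                  _handle_key(_key_gpio[i]);  /* placeholder handler */
      }
  }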
> Another question: my touchscreen and the 5 GPIO keys share the same IRQ
> line MA_IRQ_30. How can I deal with the two kinds of drivers?
Well, I assume "MA_IRQ_30" denotes the interrupt line between the second GPIO controller and the interrupt controller of the ARM core, not a GPIO pin. This interrupt is delivered to the GPIO driver itself and should be used neither by your touchscreen driver nor by your GPIO key driver. The GPIO key driver and the touchscreen driver, as clients of the GPIO driver, should register signal handlers for their corresponding GPIO pins. When an interrupt occurs on line "MA_IRQ_30", the GPIO driver gets informed and, after reading the appropriate device registers, decides which device (respectively pin) connected to the GPIO controller was responsible for the interrupt. It then triggers a signal to inform the right client, either the touchscreen signal handler or one of the GPIO key handlers.
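Conceptually, the demultiplexing inside the GPIO driver looks roughly like this (only a sketch: _read_irq_status(), _client_sigh_for_pin(), and _ack_irq() are hypothetical helpers standing in for the driver's actual register access and bookkeeping):

  /*
   * Sketch of the GPIO driver's handling of MA_IRQ_30: the driver owns
   * the IRQ line and forwards per-pin signals to its clients.
   */
  void _handle_gpio2_irq()
  {
      /* hypothetical helper: read the controller's IRQ-status register */
      unsigned status = _read_irq_status();

      for (unsigned pin = 0; pin < 32; pin++) {
          if (!(status & (1u << pin)))
              continue;

          /* look up the signal handler the client registered for this pin */
          Genode::Signal_context_capability sigh = _client_sigh_for_pin(pin);
          if (sigh.valid())
              Genode::Signal_transmitter(sigh).submit();
      }

      _ack_irq();  /* hypothetical: acknowledge the line at the controller */
  }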
> In os/drivers/input/, should I make two directories named ft5406 and
> gpio_keys? When the IRQ comes, which handle_event() will execute?
In general, I would recommend combining both devices into one and the same input driver. In fact, both devices are just different sources for the input system. Probably, you want e.g. the window manager "nitpicker" to get both kinds of input events, so that they can be distributed to nitpicker clients. If you had two different input drivers, you would have to rework nitpicker to accept more than one input source.
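A combined driver could then dispatch both kinds of signal contexts in a single event loop and feed everything into one input-event queue, roughly along these lines (again only a sketch: _touch_ctx, _key_ctx[], NUM_KEYS, _handle_touch_event(), and _handle_key_event() are placeholders):

  void _combined_event_loop()
  {
      for (;;) {
          Genode::Signal sig = _sig_rec.wait_for_signal();

          /* touchscreen and GPIO keys are told apart by their contexts */
          if (sig.context() == &_touch_ctx) {
              _handle_touch_event();    /* read the ft5406, submit events */
              continue;
          }

          for (int i = 0; i < NUM_KEYS; i++)
              if (sig.context() == &_key_ctx[i])
                  _handle_key_event(i); /* read the pin state, submit events */
      }
  }

Both handlers would then submit their input events to the same event queue that the driver's input service hands out to nitpicker.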
Regards,
Stefan
PS: By the way, the GPIO session interface was rewritten entirely. The changes currently lie in the staging branch of the official Genode repository but will probably migrate to master soon. The new interface allows for exactly one GPIO pin per GPIO session. Moreover, the interface is slightly narrower: https://github.com/genodelabs/genode/commit/ca92984bccf5e12c970e04fc2923bd19...
Hi!
> In general, I would recommend combining both devices into one and the same
> input driver. In fact, both devices are just different sources for the
> input system. Probably, you want e.g. the window manager "nitpicker" to get
> both kinds of input events, so that they can be distributed to nitpicker
> clients. If you had two different input drivers, you would have to rework
> nitpicker to accept more than one input source.
Finally, I combined the two devices into one driver and routed nitpicker's input service to that driver. It worked. But I find that the Android running on the microkernel is slower than native Android without a microkernel. Is that because of the IPC communication? In Genode, there is a lot of RPC to establish the server/client model. What is the relationship between RPC and IPC? Is that the reason that Android becomes slower? Thanks very much!
Hello,
> Finally, I combined the two devices into one driver and routed nitpicker's
> input service to that driver. It worked. But I find that the Android running
> on the microkernel is slower than native Android without a microkernel. Is
> that because of the IPC communication? In Genode, there is a lot of RPC to
> establish the server/client model. What is the relationship between RPC and
> IPC? Is that the reason that Android becomes slower?
On Genode, each RPC corresponds to one IPC call: the client-side framework marshals the arguments, performs a synchronous IPC to the server, and the server unmarshals them, invokes the function, and returns the result the same way.
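For illustration, an RPC interface on Genode is declared with the GENODE_RPC macros, and every client-side call of such a function is carried out as one kernel IPC to the server. A sketch (the "Hello" session is just a made-up example, not something from your scenario):

  #include <session/session.h>
  #include <base/rpc.h>

  namespace Hello { struct Session; }

  struct Hello::Session : Genode::Session
  {
      static const char *service_name() { return "Hello"; }

      virtual void say_hello()       = 0;
      virtual int  add(int a, int b) = 0;

      /*
       * Each RPC function declared here results in exactly one
       * kernel IPC per client-side call.
       */
      GENODE_RPC(Rpc_say_hello, void, say_hello);
      GENODE_RPC(Rpc_add, int, add, int, int);
      GENODE_RPC_INTERFACE(Rpc_say_hello, Rpc_add);
  };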
There is no general answer to how much the inter-process communication affects application performance. We always try to design the protocols between components such that IPC is used in a sensible way. For example, to communicate networking packets among the components of a Genode system, we don't use one IPC per packet but shared-memory buffers combined with asynchronous notifications. That way, many packets are passed from one component to another at once.
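As an illustration of this pattern, a client of the Nic session can enqueue a whole batch of packets into the shared-memory packet stream while the server is merely notified asynchronously. A rough sketch (error handling and acknowledgement processing omitted; send_batch() is just an illustrative helper):

  #include <nic_session/connection.h>
  #include <util/string.h>

  void send_batch(Nic::Connection &nic, char const *payload,
                  Genode::size_t len, unsigned count)
  {
      /* enqueue many packets into the shared-memory buffer ... */
      for (unsigned i = 0; i < count; i++) {
          Genode::Packet_descriptor p = nic.tx()->alloc_packet(len);
          Genode::memcpy(nic.tx()->packet_content(p), payload, len);

          /* ... submitting only posts the packet, no blocking IPC per packet */
          nic.tx()->submit_packet(p);
      }
      /* the server is notified asynchronously and processes the whole batch */
  }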
In our experience with optimizing application performance in various scenarios, the speed of the IPC mechanism as provided by the kernel has hardly ever been a dominating factor.
I suppose that the visibly slower performance of L4Android compared to native Android can mainly be attributed to the different ways the two graphics stacks work. Native Android employs the GPU for rendering graphics, which is not only much faster than CPU-based rendering but also offloads work from the CPU. In contrast, L4Android performs all graphics rendering on the CPU and leaves the GPU unused.
Regards,
Norman
Hi!
> I suppose that the visibly slower performance of L4Android compared to
> native Android can mainly be attributed to the different ways the two
> graphics stacks work. Native Android employs the GPU for rendering
> graphics, which is not only much faster than CPU-based rendering but also
> offloads work from the CPU. In contrast, L4Android performs all graphics
> rendering on the CPU and leaves the GPU unused.
Thanks for the reply. You mentioned the GPU. How can I use the PandaBoard's GPU? Maybe I should write a driver? Or should I enhance nitpicker and nitfb? I am not familiar with the GPU. Will you make the Genode system support the GPU in the future?
Hello,
> You mentioned the GPU. How can I use the PandaBoard's GPU? Maybe I should
> write a driver? Or should I enhance nitpicker and nitfb?
This is challenging because most GPU vendors (such as the vendor of the GPU used in OMAP4 / Pandaboard) do not publish documentation for their devices. Even though the in-kernel part of their drivers is available as Open-Source code, most of the driver functionality comes in the form of proprietary libraries. This makes it difficult to enable those devices on non-Linux OSes.
For Intel GPUs (which are documented!), we did some preliminary work a few years ago, but we haven't followed up on this development. See:
http://genode.org/documentation/release-notes/10.08#Gallium3D_and_Intel%27s_...
> I am not familiar with the GPU. Will you make the Genode system support the
> GPU in the future?
Currently, there are no tangible plans to do so. The focus of our present line of work lies in different areas. See our road map:
http://genode.org/about/road-map
Technically, we'd actually be excited to work on the GPU topic. However, as this is a pretty huge undertaking, we won't be able to address it in the short term without external help or funding.
Regards,
Norman