Hi all,
During my work, I've written a VESA driver that is capable of serving multiple clients (using 'virtual' framebuffers, derived from your vesa_drv).
I always thought that my 'Session_component' is created only once, when the first client opens a connection to my service, so that init calls 'create_session' in my 'Root_component' just once and never again.
That was obviously wrong. I've found out that 'create_session' is called once for each client - not on creation of the connection, but on the first IPC call the client invokes.
For my setting, this behaviour is not useful, so I've manually turned my 'Session_component' into a singleton by allocating it only once in 'create_session' and handing out that single object on each following session creation.
I've noticed some strange behaviour with that setting.
(1) I saw that something complained about my singleton:

--
[init -> vesa_drv] void Genode::Avl_node<NT>::insert(Genode::Avl_node<NT>*) [with NT = Genode::Object_pool<Genode::Server_object>::Entry]: Inserting element 26fc twice into avl tree!
--
But that did not seem to matter, so I ignored it...
(2) I had to identify my clients somehow to maintain state information. So I hand over a 'shared secret' in each IPC call. That's simply the dataspace cap the client got from me.
I noticed that something other than my clients called one of my methods during startup with an invalid secret.
--
[init -> vesa_drv] virtual void Framebuffer::Session_component::toggle(Genode::Dataspace_capability): Framebuffer::Session_component::toggle 4b383d61h
[init -> vesa_drv] virtual void Framebuffer::Session_component::toggle(Genode::Dataspace_capability): Framebuffer::Session_component::toggle: _fb_cap not accepted.
--
I wonder who that might be. Maybe a relic of my own code, or does the framework do some kind of 'reflection'?
(3) However, none of that really bothered me, and at first everything seemed to work nicely. But when my setting grew bigger, some IPC calls stopped working properly.
One of my two clients (the second) caught an 'Ipc_error' when performing its second request; the first request went fine. I thought that maybe the IPC could not be performed because the Session_component was serving another IPC at that moment, so I ignored the exception and kept retrying the call - without any success, and from here on I can't figure out what is happening.
Does anyone have an idea what I've missed?
Kind regards
Sven

--
Sven Fülster
Hi Sven,
On Wednesday, 26. August 2009 18:04:34 Sven Fülster wrote:
Hi all,
During my work, I've written a VESA driver that is capable of serving multiple clients (using 'virtual' framebuffers, derived from your vesa_drv).
I always thought that my 'Session_component' is created only once, when the first client opens a connection to my service, so that init calls 'create_session' in my 'Root_component' just once and never again.
That was obviously wrong. I've found out that 'create_session' is called once for each client - not on creation of the connection, but on the first IPC call the client invokes.
That's not fully true. It is called every time a new session is created for a client (normally not by the client itself), and it is not called when the client invokes its first IPC: by the time the client calls the server for the first time, the session must already have been created. The outcome of the session-creation process is that the client owns a capability referencing the service. Without that capability, the client cannot call the server at all, so a client cannot invoke the service before 'create_session' has been invoked.
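To illustrate, here is a hypothetical client-side sketch (it uses the 'Framebuffer::Connection' wrapper of the generic framebuffer session; the exact constructor arguments differ between Genode versions): the session - and thereby 'create_session' on the server side - is established when the connection object is constructed, before any RPC takes place.

  #include <framebuffer_session/connection.h>

  int main()
  {
      /* constructing the connection asks the parent for a new session,
         which eventually invokes 'create_session' in the server's root
         component - no RPC to the server has happened yet */
      static Framebuffer::Connection framebuffer;

      /* only now, with the session capability in hand, RPCs can be made */
      Genode::Dataspace_capability fb_ds = framebuffer.dataspace();

      /* ... attach and use fb_ds ... */
      return 0;
  }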
For my setting, this behaviour is not useful, so I've manually turned my 'Session_component' into a singleton by allocating it only once in 'create_session' and handing out that single object on each following session creation.
I've noticed some strange behaviour with that setting.
(1) I saw that something complained about my singleton
--
[init -> vesa_drv] void Genode::Avl_node<NT>::insert(Genode::Avl_node<NT>*) [with NT = Genode::Object_pool<Genode::Server_object>::Entry]: Inserting element 26fc twice into avl tree!
--
When a node (process) in Genode requests a 'session' from its parent, that request - depending on your policy - is propagated further until it reaches a node that owns a capability to the 'root interface' of the requested service (base/include/root/root.h). Calling the 'session(args)' method of the root interface creates a new 'Session_component' - each time it is called. After creation, that component is inserted into an AVL tree, which is used to locate the 'Session_component's of the different clients. So if you implement 'create_session' in a manner that returns the same object twice, you get the complaint above as soon as the same object is inserted into that tree a second time.
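As a sketch of the intended structure (following the pattern from the client/server tutorial linked below; the class names are placeholders and the constructor signatures may differ between Genode versions), '_create_session' should allocate a fresh 'Session_component' on every call:

  #include <root/component.h>

  class Root : public Genode::Root_component<Session_component>
  {
      protected:

          /* invoked by the framework for each new session request; the
             returned object gets registered at the entrypoint, so it must
             be a freshly allocated object per session, never a shared
             singleton */
          Session_component *_create_session(const char *args)
          {
              return new (md_alloc()) Session_component();
          }

      public:

          Root(Genode::Rpc_entrypoint *ep, Genode::Allocator *md_alloc)
          : Genode::Root_component<Session_component>(ep, md_alloc) { }
  };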
But that did not seem to matter, so I ignored it...
Well, you shouldn't ignore that :-)
(2) I had to identify my clients somehow to maintain state information. So I hand over a 'shared secret' in each IPC call. That's simply the dataspace cap the client got from me.
That's exactly the task of a Session_component - to maintain the state information of one client. The client's capability in fact references a Session_component and the state encapsulated in that object, so you don't need an extra identifier or secret. Regarding your problem: if your clients need to share something, share a common object between their respective 'Session_component's. For example, you might have a framebuffer object representing the physical framebuffer that is globally known or referenced by each Session_component, while each Session_component stores all the information needed to handle its client's virtual framebuffer. That way you don't have to abuse the framework's abstractions either ;-).
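A minimal sketch of that structure, with hypothetical names ('Physical_fb' stands for whatever global driver state you have; in the real driver, 'Session_component' would additionally derive from the RPC-object base of your session interface):

  /* one object representing the physical framebuffer, shared by all sessions */
  struct Physical_fb
  {
      /* mode, dataspace of the real framebuffer, currently visible client, ... */
  };

  class Session_component
  {
      private:

          Physical_fb &_phys;   /* shared driver state, the same for all clients */

          /* per-client state, e.g., the client's virtual framebuffer, lives
             here - no extra 'shared secret' is needed, because the client's
             session capability already refers to exactly this object */

      public:

          Session_component(Physical_fb &phys) : _phys(phys) { }
  };

'_create_session' would then hand the shared object to each newly created session, e.g., 'return new (md_alloc()) Session_component(_phys);'.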
Potentially, the documentation of sessions, session components, and root components isn't sufficient. We will review our documents regarding that.
Also, the VESA framebuffer driver isn't a good starting point for you, as you want to serve multiple clients while the driver is limited to one. You may have a look at the nitlog service, for example, which multiplexes the logging messages of different clients into one buffer (demo/src/server/nitlog).
Moreover, you may have a look at the architectural description of the session interface once again: http://genode.org/documentation/architecture/interfaces and at the introductory client/server tutorial: http://genode.org/documentation/developer-resources/client_server_tutorial
I noticed that something other than my clients called one of my methods during startup with an invalid secret.
--
[init -> vesa_drv] virtual void Framebuffer::Session_component::toggle(Genode::Dataspace_capability): Framebuffer::Session_component::toggle 4b383d61h
[init -> vesa_drv] virtual void Framebuffer::Session_component::toggle(Genode::Dataspace_capability): Framebuffer::Session_component::toggle: _fb_cap not accepted.
--
I wonder who that might be. Maybe a relic of my own code, or does the framework do some kind of 'reflection'?
It might be a relic, or a client that only speaks the original framebuffer protocol and not your extended one - but that's just guessing.
(3) However, none of that really bothered me, and at first everything seemed to work nicely. But when my setting grew bigger, some IPC calls stopped working properly.
One of my two clients (the second) caught an 'Ipc_error' when performing its second request; the first request went fine. I thought that maybe the IPC could not be performed because the Session_component was serving another IPC at that moment, so I ignored the exception and kept retrying the call - without any success, and from here on I can't figure out what is happening.
You don't get an IPC error just because another client is using the same Session_component - in that case, you would simply block.
It's hard to point you in the right direction without knowing the code, but I recommend that you first fix the problems above before searching for a solution to this one. It is potentially caused by the former problems.
I have a question too: I wonder why you write a component to multiplex the framebuffer when you have nitpicker, which does this for you. Do you need some extraordinary functionality?
Kind regards
Stefan
PS: In general, it is a good idea to implement things like multiplexing a driver in a separate component rather than building it into the VESA driver itself. That way you could use different framebuffer drivers behind your virtual framebuffers. But for now it's probably better not to open up too many building sites at the same time.
Does anyone have an idea what I've missed?
Kind regards
Sven
Sven Fülster
Hi Stefan,
I have a question too: I wonder why you write a component to multiplex the framebuffer when you have nitpicker, which does this for you. Do you need some extraordinary functionality?
Yes, why didn't I consider nitpicker? I had not used it before and thought it would be too big for my plan...
Following your hint, I have indeed opened that new building site, and now my applications use nitpicker (I've read your paper about it...), because in fact it's exactly what I needed. At the moment, I'm playing with some graphical enhancements nitpicker allows for, such as windows, etc. :)
Thank you for the hints. Indeed, my own approach was a nice introduction to some of the internals of your framework.
Kind regards
Sven

--
Sven Fülster