RPC with LWIP

Norman Feske norman.feske at ...1...
Wed May 28 20:30:52 CEST 2014


Hi Mohammad,

the steps you described look plausible to me.

> What I am Facing:
> 
>     - When I tried to run more than one echo server (on different
> ports), lwIP did not work well. What I found is that when the two
> echo servers try to accept connections, the first one blocks the
> other, which means that lwIP is not multi-threaded any more.
>    Does the RPC prevent lwIP from multi-threading, or am I doing
> something wrong?
>    I tried to make my problem easy to understand; if not, just ask me
> to clarify.

According to your server-side code, there is only a single entrypoint
thread that serves the requests of all clients. While one client issues
a blocking call (such as accept), the entrypoint thread cannot process
RPC requests of other clients. All RPC calls are serialized.
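
To make this concrete, here is a minimal sketch (with a hypothetical
'Echo_session' interface, written against the classic Genode RPC API -
not your actual code) of how a blocking lwIP call inside an RPC
function stalls the shared entrypoint:

  /* hypothetical RPC interface of the echo service */
  #include <session/session.h>
  #include <base/rpc.h>

  struct Echo_session : Genode::Session
  {
    static const char *service_name() { return "Echo"; }

    virtual int accept() = 0;

    GENODE_RPC(Rpc_accept, int, accept);
    GENODE_RPC_INTERFACE(Rpc_accept);
  };

  /* server-side implementation, one shared entrypoint serves everyone */
  #include <base/rpc_server.h>
  #include <lwip/sockets.h>

  struct Echo_session_component : Genode::Rpc_object<Echo_session>
  {
    int _listen_fd; /* lwIP listen socket of this session */

    int accept()
    {
      /*
       * lwip_accept blocks until a connection arrives. Because the
       * one entrypoint thread executes this function, it cannot
       * dispatch RPC requests of any other client in the meantime.
       */
      return lwip_accept(_listen_fd, 0, 0);
    }
  };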

There are two possible ways to solve this issue. You can either (1)
remove blocking RPC calls from the RPC interface or (2) make the server
multi-threaded (using one entrypoint per session).

If you go for (1), how would you implement blocking functions like
accept? The solution is the use of asynchronous notifications. Instead
of issuing a blocking RPC call, the client would register a signal
handler at the server. It would then issue a non-blocking version of
the RPC function (which returns immediately) and subsequently wait for
a signal from the server. In contrast to the blocking RPC call, in this
scenario the blocking happens at the client side only, so the server
can keep handling other incoming RPC requests. Of course, on the server
side, this asynchronous mode of operation must be implemented
accordingly, e.g., by using 'select' in the main program. So the server
cannot remain a mere wrapper around the lwIP API.
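
As a rough sketch of this scheme for accept - extending the
hypothetical 'Echo_session' from above with the made-up RPC functions
'sigh', 'accept_async', and 'accept_result' - the two sides could look
like this:

  #include <base/signal.h>

  /* client side - the blocking now happens here, not in the server */
  void client_accept(Echo_session &session)
  {
    static Genode::Signal_receiver sig_rec;
    static Genode::Signal_context  sig_ctx;

    /* register the signal handler at the server (one-time setup) */
    session.sigh(sig_rec.manage(&sig_ctx));

    /* non-blocking RPC, returns immediately */
    session.accept_async();

    /* block locally until the server signals an incoming connection */
    sig_rec.wait_for_signal();

    /* pick up the result with a second non-blocking RPC */
    int const client_fd = session.accept_result();
    (void)client_fd; /* handle the new connection */
  }

  /* server side, e.g., called from the select loop of the main program */
  void notify_client(Genode::Signal_context_capability sigh)
  {
    Genode::Signal_transmitter(sigh).submit();
  }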

If you go for (2), you have to implement the 'Root_component' class
yourself. Each 'Session_component' would contain an 'Rpc_entrypoint'
that manages this single 'Session_component' only. Effectively, each
time a client creates a connection, a dedicated entrypoint thread gets
created. Unfortunately, the current version of Genode does not feature
an example of a server that uses one entrypoint per session. However,
about two years ago, there existed a version of the timer driver for
NOVA that employed this scheme. You may take this code as inspiration:


https://github.com/genodelabs/genode/tree/b54bdea2aae245b2d8f53794c1c1b9b2da371592/os/src/drivers/timer/nova
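
To give you a rough idea of the pattern (hypothetical names, written
from memory against the current base API, so please treat the timer
driver above as the authoritative example):

  #include <base/env.h>
  #include <base/rpc_server.h>
  #include <root/root.h>
  #include <cap_session/cap_session.h>

  class Echo_root : public Genode::Rpc_object<Genode::Typed_root<Echo_session> >
  {
    private:

      enum { STACK_SIZE = 4096 };

      Genode::Cap_session *_cap; /* needed to create further entrypoints */

    public:

      Echo_root(Genode::Cap_session *cap) : _cap(cap) { }

      Genode::Session_capability session(Session_args const &,
                                         Genode::Affinity const &)
      {
        /* entrypoint thread dedicated to the new session */
        Genode::Rpc_entrypoint *ep = new (Genode::env()->heap())
          Genode::Rpc_entrypoint(_cap, STACK_SIZE, "echo_session_ep");

        Echo_session_component *session = new (Genode::env()->heap())
          Echo_session_component();

        /* a blocking accept now stalls only this session's entrypoint */
        return ep->manage(session);
      }

      void upgrade(Genode::Session_capability, Upgrade_args const &) { }

      void close(Genode::Session_capability)
      {
        /* look up and destroy the session object and its entrypoint */
      }
  };

Note that 'close' must also destroy the per-session entrypoint, which
is where most of the hidden complexity of this approach lies.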

Which would be the preferable way to go? I would definitely go for (1)
because this solution is much cleaner and more light-weight (i.e., you
do not need a thread per client). However, I am afraid that it is more
elaborate than (2) because you will need to model your server as a
state machine. The normal BSD socket interface uses a blocking call for
accept, so you'd need to replace it with an asynchronous alternative,
possibly diving right into the inner realms of the lwIP stack to
accomplish that.
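
If you decide to dive in, one candidate - an assumption on my part, not
tested code - is lwIP's raw/callback API, which announces new
connections through a callback instead of a blocking accept. The
hypothetical 'connection_arrived' below would stash the new PCB and
submit the client signal discussed under (1):

  extern "C" {
  #include <lwip/tcp.h>
  }

  /*
   * Invoked by the lwIP stack when a connection arrives on the
   * listening PCB - no thread ever blocks in an accept call.
   */
  static err_t accept_cb(void *arg, struct tcp_pcb *newpcb, err_t)
  {
    Echo_session_component *session =
      static_cast<Echo_session_component *>(arg);

    /* remember 'newpcb' and wake the waiting client via its signal */
    session->connection_arrived(newpcb);
    return ERR_OK;
  }

  static void start_listening(Echo_session_component *session, u16_t port)
  {
    struct tcp_pcb *pcb = tcp_new();
    tcp_bind(pcb, IP_ADDR_ANY, port);
    pcb = tcp_listen(pcb);       /* replaces pcb by a listening pcb */
    tcp_arg(pcb, session);       /* context handed to the callback */
    tcp_accept(pcb, accept_cb);  /* register the accept callback */
  }

Keep in mind that the raw API is not thread-safe, so these calls must
be executed in the context of the lwIP (tcpip) thread, e.g., by posting
them via 'tcpip_callback'.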

Btw, if you look at recent implementations of services on Genode, you
will find that all new servers have embraced the asynchronous design.
E.g., take a look at how 'Timer_session::msleep()' works as an example.

Still, exploring (2) might be a worthwhile experiment and a useful
interim solution.

Cheers
Norman

-- 
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
