Hi, I will try to explain what I want, what I did, and then what I am facing.

What I am trying to do:
- use only one lwIP (TCP/IP) stack
- communicate with lwIP by using RPC calls
- the system should be multi-threaded and serve many processes at the same time
What I did:
- I created the client and server for the RPC, which invoke the lwIP methods. I will use the server only.
- I created the libc_rpc_lwip plugin, which is similar to libc_lwip. In libc_rpc_lwip I have a Connection object to call the methods.
- Since I want only one stack in the system, I did not initialize lwIP from the constructor of the plugin, that is:

  Plugin::Plugin()
  {
      PDBG("using the RPC lwIP libc plugin\n");
      /* rpcclient.r_lwip_tcpip_init();  -- left commented out */
  }

  Instead, I do the initialization in the RPC server, and I used a static IP for now (I don't want to handle libc_lwip_nic_dhcp right now). This is the main method of the RPC server:
  int main(void)
  {
      Cap_connection cap;

      static Sliced_heap sliced_heap(env()->ram_session(), env()->rm_session());

      PDBG("lwip init has been started.");
      lwip_tcpip_init();

      enum { BUF_SIZE = Nic::Packet_allocator::DEFAULT_PACKET_SIZE * 128 };
      lwip_nic_init(inet_addr("10.0.2.18"), inet_addr("255.255.255.0"),
                    inet_addr("10.0.2.1"), BUF_SIZE, BUF_SIZE);
      PDBG("getting the IP is done.");

      enum { STACK_SIZE = 7096 };
      static Rpc_entrypoint ep(&cap, STACK_SIZE, "rpclwip_ep");

      static rpclwipsession::Root_component rpclwip_root(&ep, &sliced_heap);
      env()->parent()->announce(ep.manage(&rpclwip_root));

      /* We are done with this and only act upon client requests now. */
      sleep_forever();

      return 0;
  }
- For testing, I am using a simple echo server configured to use libc_rpc_lwip instead of libc_lwip, and it succeeded.
What I am Facing:
- When I tried to run more than one echo server (on different ports), lwIP is not working well. What I found is that when the two echo servers try to accept connections, the first one blocks the other, which means lwIP is not multi-threaded any more. Does the RPC prevent lwIP from being multi-threaded, or am I doing something wrong? I hope I made my problem easy to understand; if not, just ask me to clarify.
Thank you in advance
Hi Mohammad,
the steps you described look plausible to me.
What I am Facing:
- When I tried to run more than one echo server (on different ports), lwIP is not working well. What I found is that when the two echo servers try to accept connections, the first one blocks the other, which means lwIP is not multi-threaded any more. Does the RPC prevent lwIP from being multi-threaded, or am I doing something wrong? I hope I made my problem easy to understand; if not, just ask me to clarify.
According to your server-side code, there is only a single entrypoint thread that serves the requests for all clients. While one client issues a blocking call (e.g., accept), the entrypoint thread cannot process RPC requests for other clients. All RPC calls are processed in a serialized way.
There are two possible ways to solve this issue. You can either (1) remove blocking RPC calls from the RPC interface or (2) make the server multi-threaded (using one entrypoint per session).
If you go for (1), how to implement blocking functions like accept, then? The solution is the use of asynchronous notifications. Instead of issuing a blocking RPC call, the client would register a signal handler at the server. It would then issue a non-blocking version of the RPC function (which immediately returns) and subsequently wait for a signal from the server. In contrast to the blocking RPC call, in this scenario the blocking happens at the client side only. So the server can keep handling other incoming RPC requests. Of course, on the server side, this asynchronous mode of operation must be implemented accordingly, e.g., by using 'select' in the main program. So the server cannot remain a mere wrapper around the lwip API.
If you go for (2), you have to implement the 'Root_component' class by yourself. Each 'Session_component' would contain a 'Rpc_entrypoint' that manages the single 'Session_component' only. Effectively, each time a client creates a connection, a dedicated entrypoint thread gets created. Unfortunately, the current version of Genode does not feature an example for a server that uses one entrypoint per session. However, once (like two years ago), there existed a version of the timer driver for NOVA that employed this scheme. You may take this code as inspiration:
https://github.com/genodelabs/genode/tree/b54bdea2aae245b2d8f53794c1c1b9b2da...
Which would be the preferable way to go? I would definitely go for (1) because this solution would be much cleaner and more light-weight (i.e., you'd not need a thread per client). However, I am afraid that it is more elaborate than (2) because you will need to model your server as a state machine. The normal BSD socket interface uses blocking calls for accept. So you'd need to replace them by an asynchronous alternative, possibly diving right into the inner realms of the lwIP stack to accomplish that.
Btw, if you look at recent implementations of services on Genode, you will find that all new servers have embraced the asynchronous design. E.g., take a look at how 'Timer_session::msleep()' works as an example.
Still, exploring (2) might be a worthwhile experiment and a useful interim solution.
Cheers Norman
Hi Norman,

I am a little confused. Do I need to replace all the lwIP socket calls that have blocking behavior (accept, recvfrom, etc.) with non-blocking calls, and then use asynchronous notifications instead of blocking RPC calls?

Thanks
2014-05-28 19:30 GMT+01:00 Norman Feske <norman.feske@...1...>:
-- Dr.-Ing. Norman Feske Genode Labs
http://www.genode-labs.com · http://genode.org
Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
_______________________________________________ Genode-main mailing list Genode-main@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/genode-main
Hi,
I am a little confused. Do I need to replace all the lwIP socket calls that have blocking behavior (accept, recvfrom, etc.) with non-blocking calls, and then use asynchronous notifications instead of blocking RPC calls?
you would still use RPC calls for those operations but they won't block on the server side. Let me illustrate the idea with a simple example using a 'read' function provided by a terminal-like server. The client wants to wait for user input by calling 'read'. From the client's perspective, the call of 'read' should block until new user input is available. When returning, the function should return the new user input.
With your original design, the client called the server's 'read' RPC function. If the server had no input, it would block. While blocking, however, no other client could be served.
My proposal to overcome this situation combines RPC calls with asynchronous notifications: We add a new function to the RPC interface that allows the client to register a signal handler. The function takes a 'Signal_context_capability' as argument. This signal context capability is created by the client using the signal API (see 'base/signal.h', and the 'Signal_receiver::manage()' function in particular).
After creating a session, the first thing the client does is to install the signal handler. Then, it uses the server by calling 'read'. If the server has input available, it will return the data right away. Both server and client are happy. If the server has no input available, the server would respond with a return code that tells the client that it's now time to block. E.g., for the read function, the server could just return a 0. If the client detects this condition, it would block using 'Signal_receiver::wait_for_signal()'. The server, however, is not blocking and can serve other clients.
Now, if new input arrives at the server, the server would buffer the data and submit a signal to the signal handler that was registered by the client (using 'Signal_transmitter::submit()'). This signal will unblock the 'wait_for_signal' function at the client. Now, the client will again call 'read' to receive the new data.
Throughout Genode, you can find several examples that employ this scheme. I already mentioned the timer-session interface. But you may also have a look at the terminal-session.
Best regards Norman
Hi Norman,
thank you for your reply. Actually, I am thinking of using the second option, making the server multi-threaded (using one entrypoint per session). But in this case I am facing the problem that the RPC has its own scheduling and lwIP also has its own. So I will have the RPC waiting at one point and lwIP waiting in a different place. Is there any way to combine both wait points into one?
2014-06-15 19:33 GMT+01:00 Norman Feske <norman.feske@...1...>:
Hi Mohammad,
thank you for your reply. Actually, I am thinking of using the second option, making the server multi-threaded (using one entrypoint per session). But in this case I am facing the problem that the RPC has its own scheduling and lwIP also has its own. So I will have the RPC waiting at one point and lwIP waiting in a different place. Is there any way to combine both wait points into one?
I am afraid that this information is too vague to give meaningful advice.
To find out where your program gets stuck, I have two recommendations: Simplify your scenario and move the scenario to Linux to inspect it with GDB.
As for simplifying the scenario, I recommend you to take the RPC interface out for now. Integrate both clients into a single program, which also links against lwIP. Create a thread for each client and let the thread call the respective "main" routine. When running the resulting program, you should see the same behavior as now because two threads are interacting with the lwIP stack.
Moving the scenario to Linux will enable you to closely inspect what is going on. Each Genode thread will be a Linux thread. You can see all the threads using 'ps -eLf' and attach to each of them via 'gdb -p <thread ID>'. This way, you get a good overview of the situation when it hangs. For running your networking test on Linux, you can use the NIC driver at 'os/src/drivers/nic/linux', which uses a tap device. For more information about setting up networking using a tap device, please refer to the following descriptions:
http://www.genode.org/documentation/release-notes/10.02#NIC_driver_for_Linux http://genode.org/documentation/release-notes/10.05#Arora_web_browser
You may also take the run script at 'libports/run/lwip_lx.run' as reference. It starts a little HTTP server. On startup, it prints its IP address, which you can use as URL in your browser.
Best regards Norman
Hi Norman,
As for simplifying the scenario, I recommend you to take the RPC interface out for now. Integrate both clients into a single program, which also links against lwIP. Create a thread for each client and let the thread call the respective "main" routine. When running the resulting program, you should see the same behavior as now because two threads are interacting with the lwIP stack.
What I want is to let the RPC server serve two clients (two echo clients, for example) working on different ports. As you advised, I removed the RPC interface and did the following:
- created a server method which takes the port number as a parameter, creates an echo server, and listens on the given port: void *server (void *Sport) { .... }
- in the main method I create two threads and supply each one with a different port,
- created the run file and then ran the test.
What I found is that both servers (threads) connect correctly and both of them block at accept, which is the right behavior, whereas with the RPC one of them blocks at accept and the other waits.
From the client side, I connected to both servers via telnet and typed messages to them, and it worked smoothly. So I think that without the RPC we don't have the blocking problem, or maybe my test was wrong. I will attach the .run, server.c, and .mk files.

Best,
2014-06-26 13:04 GMT+01:00 Norman Feske <norman.feske@...1...>:
Hi Mohammad,
From the client side, I connected to both servers via telnet and typed messages to them, and it worked smoothly. So I think that without the RPC we don't have the blocking problem, or maybe my test was wrong. I will attach the .run, server.c, and .mk files.
your test is exactly what I had in mind for simplifying the scenario. Since the test works when two threads are using lwIP directly, we know that your issue is not related to the way lwIP works. Your test pinpoints the problem to the RPC interface. It seems that you have missed a step when turning the RPC interface multi-threaded. Please make sure that
* Your 'Lwip::Session_component' has an 'Rpc_entrypoint' as a member variable. So each time a session is created, a dedicated entrypoint gets created, too.
* Your 'Lwip::Session_component' is managed by the session's own entrypoint, not the entrypoint that serves the root interface. Note that the default 'Root_component' provided by Genode's 'root/component.h' does not do what you want. You cannot use the default implementation but must implement the 'Root_component' yourself. Please take a close look at the timer variant [1] I referenced in my email from May 28.
[1] https://github.com/genodelabs/genode/tree/b54bdea2aae245b2d8f53794c1c1b9b2da...
To see that each lwIP session is executed by a different thread, you may add the following debug output to one of the lwIP RPC functions (e.g., at the beginning of 'accept').
PDBG("thread_base at %p", Genode::Thread_base::myself());
The 'myself' function returns a different pointer for each thread that calls the function. You will see two messages, one for each session. If you see two different pointer values, you know that both sessions are dispatched by different threads - this is what we want. I guess that you will see the same value twice.
If the problem persists, would you consider making the branch publicly available (e.g., on GitHub) so that I could have a look?
On another note, have you had success with inspecting the scenario using GDB on Linux?
Good luck! Norman
Hi Norman,
in order to simplify the implementation of multi-threaded/multi-entrypoint servers, I made an attempt to extract the generic part from the multi-threaded timer implementation you referenced.
I therefore created a Root_component_multi and a Session_component_multi from which a server implementation can easily inherit. As far as I know, it is working in Mohammad's case.
I put this on github so that anyone else can reuse the code [1].
[1] https://github.com/ValiValpas/genode/commit/b8afa38dc98a28da525442022f7a0149...
Best Johannes
On Wed, 02 Jul 2014 14:37:04 +0200 Norman Feske <norman.feske@...1...> wrote:
Hi Norman,

Thank you for your last email. At the same time, I would like to thank Johannes for the great help as well. The RPC_LWIP works fine now without blocking. I have not finished the whole test process yet, but so far it is working smoothly.

I will keep you all posted in case I face troubles in the test phases.

Best Regards,
2014-07-07 12:44 GMT+01:00 Johannes Schlatow <schlatow@...238...>: