Experimenting with the VFS socket plugin, I'm tripping over an exception that I'm not sure how to handle. This is demonstrated in the "netty_tcp" test, which launches two echo servers, one listening on port 80, and another on port 8080. Both servers launch and become operational on about 25% of runs (under x86_64, nova kernel, QEMU, user network); on the remaining starts, one of the servers succeeds, but the other fails like this:
#+BEGIN_EXAMPLE
[init -> netty-server-8080] initialize server
[init -> netty-server-80] initialize server
[init -> netty-server-8080] sd=0
[init -> netty-server-8080] config: port=8080 read_write=0 nonblock=0
[init -> netty-server-8080] setsockopt: SO_REUSEADDR not yet implemented - always true
[init -> netty-server-80] sd=0
[init -> netty-server-80] config: port=80 read_write=0 nonblock=1
[init -> netty-server-80] setsockopt: SO_REUSEADDR not yet implemented - always true
[init -> netty-server-80] Error: plugin()->open("/socket/tcp/2/listen") failed
[init -> netty-server-80] Error: _fd_for_type: listen file not accessible
[init -> netty-server-8080] test in blocking mode
[init -> netty-server-80] Error: Uncaught exception of type 'Socket_fs::Context::Inaccessible'
[init -> netty-server-80] Warning: abort called - thread: ep
[init] child "netty-server-80" exited with exit value 1
#+END_EXAMPLE
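For context, each server's startup reduces to the usual BSD socket sequence sketched below (a minimal C++ sketch, not the actual netty test code; listen_on is an illustrative helper). The failing open of "/socket/tcp/2/listen" presumably happens behind the libc listen() call:

#+BEGIN_EXAMPLE
/* illustrative sketch only -- not the netty test sources */
#include <sys/socket.h>
#include <netinet/in.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

static int listen_on(unsigned short port, int nonblock)
{
	int sd = socket(AF_INET, SOCK_STREAM, 0);
	if (sd < 0) { perror("socket"); return -1; }

	int on = 1;
	/* reported by the plugin as "not yet implemented - always true" */
	setsockopt(sd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

	if (nonblock)
		fcntl(sd, F_SETFL, fcntl(sd, F_GETFL, 0) | O_NONBLOCK);

	struct sockaddr_in addr { };
	addr.sin_family      = AF_INET;
	addr.sin_port        = htons(port);
	addr.sin_addr.s_addr = htonl(INADDR_ANY);

	if (bind(sd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind"); close(sd); return -1;
	}

	/* the plugin opens /socket/tcp/<id>/listen behind this call */
	if (listen(sd, 5) < 0) {
		perror("listen"); close(sd); return -1;
	}
	return sd;
}
#+END_EXAMPLE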
Empirically, it appears that the server that gets past "setsockopt:..." first will become operational--the other server usually will not. The order seems to be non-deterministic.
It's not clear how to catch the 'Socket_fs::Context::Inaccessible' exception in the netty server code, or even if that is an appropriate response.
Suggestions?
- Steve Harp
Hello Steve,
On Tue, Oct 03, 2017 at 02:15:59PM -0500, Steven Harp wrote:
Experimenting with the VFS socket plugin, I'm tripping over an exception that I'm not sure how to handle. This is demonstrated in the "netty_tcp" test, which launches two echo servers, one listening on port 80, and another on port 8080. Both servers launch and become operational on about 25% of runs (under x86_64, nova kernel, QEMU, user network); on the remaining starts, one of the servers succeeds, but the other fails like this:
#+BEGIN_EXAMPLE
[init -> netty-server-8080] initialize server
[init -> netty-server-80] initialize server
[init -> netty-server-8080] sd=0
[init -> netty-server-8080] config: port=8080 read_write=0 nonblock=0
[init -> netty-server-8080] setsockopt: SO_REUSEADDR not yet implemented - always true
[init -> netty-server-80] sd=0
[init -> netty-server-80] config: port=80 read_write=0 nonblock=1
[init -> netty-server-80] setsockopt: SO_REUSEADDR not yet implemented - always true
[init -> netty-server-80] Error: plugin()->open("/socket/tcp/2/listen") failed
[init -> netty-server-80] Error: _fd_for_type: listen file not accessible
[init -> netty-server-8080] test in blocking mode
[init -> netty-server-80] Error: Uncaught exception of type 'Socket_fs::Context::Inaccessible'
[init -> netty-server-80] Warning: abort called - thread: ep
[init] child "netty-server-80" exited with exit value 1
#+END_EXAMPLE
Empirically, it appears that the server that gets past "setsockopt:..." first will become operational--the other server usually will not. The order seems to be non-deterministic.
I'm currently working on this issue, which is documented in our issue tracker.
https://github.com/genodelabs/genode/issues/2520
It's not clear how to catch the 'Socket_fs::Context::Inaccessible' exception in the netty server code, or even if that is an appropriate response.
The exception must be caught in the libc backend code and should not be reflected to the application, i.e., netty, as the app uses a plain C API.
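To illustrate the idea (a sketch only; the function name socket_fs_listen, the stand-in type declaration, and the choice of errno value are assumptions, not the actual plugin code), the backend would catch the exception where the listen file is opened and report a plain errno result to the C caller:

#+BEGIN_EXAMPLE
#include <errno.h>

/* stand-in declaration for the real Genode type (sketch only) */
namespace Socket_fs { struct Context { struct Inaccessible { }; }; }

/* Sketch: translate the exception into an errno result instead of
 * letting it escape into the C application. */
extern "C" int socket_fs_listen(int libc_fd, int backlog)
{
	try {
		/* ... look up the socket context, open
		 * /socket/tcp/<id>/listen, and write the backlog ... */
		return 0;
	}
	catch (Socket_fs::Context::Inaccessible const &) {
		errno = EADDRINUSE;  /* illustrative choice of error code */
		return -1;
	}
}
#+END_EXAMPLE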
Greets