Hi,

As a toy exercise I'm trying to write a very simple custom pager that allocates memory from RAM to handle page faults. When I try to handle the fault I get a region-conflict exception. Can someone tell me why? I think I am missing something fundamental here.
Thanks,
Daniel
----
#include <base/printf.h>
#include <base/sleep.h>
#include <base/rpc_server.h>
#include <cap_session/connection.h>
#include <dataspace/client.h>
#include <rom_session/connection.h>
#include <rm_session/connection.h>
#include <ram_session/connection.h>
#include <root/component.h>
#include <util/avl_string.h>
#include <util/misc_math.h>
#include <assert.h>

#define PAGE_SIZE 4096

using namespace Genode;

namespace Test {

    /**
     * Example pager class
     */
    class Pager : public Thread<8192>
    {
        private:

            Signal_receiver _receiver;

        public:

            Signal_receiver *signal_receiver() { return &_receiver; }

            void entry();
    };


    /**
     * Example backing store class - this one uses physical memory
     */
    class Physical_backing_store : public Signal_context
    {
        private:

            Rm_connection _rm;
            size_t        _size;

        public:

            /* ctor */
            Physical_backing_store(size_t size) : _rm(0, _size)
            {
                _size = size;
                assert(_size > 0);
            }

            virtual ~Physical_backing_store() { }

            /**
             * Page fault handler
             */
            void handle_fault()
            {
                Rm_session::State state = _rm.state();

                printf("Test::Physical_backing_store:: rm session state is %s, pf_addr=0x%lx\n",
                       state.type == Rm_session::READ_FAULT  ? "READ_FAULT"  :
                       state.type == Rm_session::WRITE_FAULT ? "WRITE_FAULT" :
                       state.type == Rm_session::EXEC_FAULT  ? "EXEC_FAULT"  : "READY",
                       state.addr);

                if (state.type == Rm_session::READY) return;

                try {
                    Dataspace_capability _ds;
                    try {
                        _ds = env()->ram_session()->alloc(PAGE_SIZE);
                    } catch (...) {
                        PERR("Actual page allocation failed.\n");
                    }

                    _rm.attach_at(_ds, state.addr & ~(PAGE_SIZE - 1));
                    PDBG("attached data space OK!\n");
                }
                catch (Genode::Rm_session::Region_conflict) {
                    PERR("Region conflict - this should not happen\n");
                }
                catch (Genode::Rm_session::Out_of_metadata) {
                    PERR("Out of meta data!\n");
                }
                catch (...) {
                    PERR("Something else caused attach to fail in fault handler.\n");
                }
                return;
            }

            Rm_connection *rm() { return &_rm; }

            Dataspace_capability ds() { return _rm.dataspace(); }

            void connect_pager(Pager &_pager)
            {
                /* connect pager signal receiver to the fault handler */
                _rm.fault_handler(_pager.signal_receiver()->manage(this));
            }
    };


    /**
     * Entry point for pager thread
     */
    void Pager::entry()
    {
        printf("Pager thread started OK.\n");

        while (true) {
            try {
                Signal signal = _receiver.wait_for_signal();
                for (int i = 0; i < signal.num(); i++) {
                    static_cast<Physical_backing_store *>(signal.context())->handle_fault();
                }
            } catch (...) {
                PDBG("unexpected error while waiting for signal");
            }
        }
    }
}


int main()
{
    Genode::printf("Test-pager example. ;-))\n");

    Test::Pager _pager;

    /* start pager thread */
    _pager.start();

    /* create dataspace etc. */
    try {
        enum { MANAGED_DS_SIZE = 64*1024*1024 };
        Test::Physical_backing_store bs(MANAGED_DS_SIZE);

        /* connect pager to fault handler */
        bs.connect_pager(_pager);

        /* attach to dataspace */
        char *addr = (char *)env()->rm_session()->attach(bs.ds());
        assert(addr);

        /* trigger read fault */
        printf("!!!Triggering fault...\n");
        printf("%c%c%c\n", addr[0], addr[1], addr[2]);

        printf("Fault handled OK.\n");

    } catch (...) {
        PERR("Something failed ????.\n");
    }

    printf("test-pager completed.\n");
    Genode::sleep_forever();
    return 0;
}
Hi Daniel,
we tried to reproduce your problem and, in doing so, had to recognize that rm-fault handling is fundamentally broken in the current Fiasco.OC/Genode version. We will investigate the issue further and provide a fix as soon as possible.
Thank you for reporting the issue!
Regards,
Stefan
Hi Stefan,

Not to rush you, but any idea how long a fix will take? (Just so I can re-prioritize if need be.)
Thanks, Daniel
Hi Daniel,
I just want to let you know that we identified the problem yesterday. Please expect the fix to be available in our SVN repository by the middle of next week.
Best regards,
Norman
Hi Daniel,
the support for managed dataspaces on Fiasco.OC is now available as of SVN revision 161. As a reference for how this feature can be used, you can find a ready-to-use run script at 'base/run/rm_fault.run'. To execute the test case, just issue 'make run/rm_fault' from your build directory.
We have also used a slightly improved version of your test case to validate the page-fault handling code. The first improvement is the initialization order of the '_size' and '_rm' members of 'Physical_backing_store': in the original version, an undefined value was passed as argument to the '_rm' constructor. The second improvement is the format string used to read values from the managed dataspace. Because the freshly allocated backing-store dataspace is initialized with zeros, %c won't show anything meaningful, so the format string was changed to print integer values instead. Please find the new version (including a run script) attached.
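To illustrate, here is a rough sketch of the two changes, assuming the class layout from your original posting (the attached version may differ in detail):

    /* fix 1: pass the constructor argument directly, so '_rm' is never
       handed the still-uninitialized '_size' member */
    Physical_backing_store(size_t size) : _rm(0, size), _size(size)
    {
        assert(_size > 0);
    }

    /* fix 2: print integer values - the freshly allocated backing store
       is zero-filled, so '%c' would print nothing visible */
    printf("%d %d %d\n", addr[0], addr[1], addr[2]);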
One known limitation of managed dataspaces on Fiasco.OC is worth mentioning: detaching a managed dataspace won't flush the corresponding region of the client's virtual address space. This is a known (but small) issue that results in the message "unmapping of managed dataspaces not yet supported" when the test case exits. Please give us a hint if this limitation becomes a problem for you.
Cheers,
Norman
Hi Norman,

Do you mean that I can't "detach" a virtual mapping from a dataspace? If so, this could in fact be a problem for us, since we need to be able to perform re-mapping. If not, how do I detach? ;-)
Thanks, Daniel
Hi Daniel,
it is possible to revoke mappings from a managed dataspace, which is needed for implementing on-demand paging including the eviction of mappings. If you take a close look at the 'rm_fault' example, you will see that it does precisely this by repeatedly attaching a mapping to, and revoking it from, the managed dataspace accessed by the child - roughly the pattern sketched below.
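As a minimal sketch of that attach/revoke pattern (the names 'rm', 'fault_addr', 'page_ds', and 'page_addr' are just placeholders here, not taken from the actual 'rm_fault' code; PAGE_SIZE is the page size from your example):

    /* populate one page of the managed dataspace with freshly allocated RAM;
       'rm' is the Rm_connection backing the managed dataspace and
       'fault_addr' is the address reported by 'rm.state()' */
    Dataspace_capability page_ds = env()->ram_session()->alloc(PAGE_SIZE);
    addr_t page_addr = fault_addr & ~(PAGE_SIZE - 1);
    rm.attach_at(page_ds, page_addr);

    /* ... the faulting client resumes and can access the page ... */

    /* evict the mapping again - the next access to this page faults anew */
    rm.detach(page_addr);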
Unmapping a whole managed dataspace, however, will currently leave traces in the virtual memory of the client. For our current uses, where a managed dataspace is used for the whole lifetime of a process, this is not a problem. But that changes if you want to use managed dataspaces more dynamically.
That said, the fix is not hard and we will resolve this issue with the upcoming release.
Cheers,
Norman