Memory write tracing/logging of an application / Watchpoints in Genode/Fiasco.OC

Stark, Josef j.stark at ...256...
Thu Jan 25 18:28:23 CET 2018


Hi,

> As you should be the parent of the traced component, you can intercept
> the CPU session and remember all thread capabilities created by the
> component. On a fault you iterate through all threads to select those
> that are currently in a page fault. A good example for this is the GDB
> monitor [1]. 
Ok, so far so good: I could successfully get the threads within my
target process and indeed access the correct instruction pointer.
But the next problem is that Thread_state::unresolved_page_fault
is always 0 for me, for all threads (I provoked a page fault by detaching
the dataspace), so I can't filter out the faulter(s). Do you have a clue?
As long as I don't attach a dataspace at the corresponding address,
it should indeed be an unresolved page fault, or am I wrong here?
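
For reference, the check I do looks roughly like this -- just a sketch,
where '_threads' stands for the thread capabilities I recorded while
intercepting the CPU session (the container name and exact includes are
my own, not taken from gdb_monitor):

  #include <cpu_thread/client.h>
  #include <base/log.h>

  /* iterate over the recorded threads of the traced component */
  for (Genode::Thread_capability cap : _threads) {

      Genode::Cpu_thread_client thread(cap);
      Genode::Thread_state state = thread.state();

      /* expected to be set for the provoked fault, but stays 0 here */
      if (state.unresolved_page_fault)
          Genode::log("faulter at ip=", Genode::Hex(state.ip));
  }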

> Now you want to select from the remaining threads those
> that are in a fault on your specific address. This can't be done with
> the current Genode API, but an easy way to achieve it would be to expand
> the Cpu_state [2] to deliver also the value of the ARM Data Fault
> Address Register or DFAR when calling Cpu_thread_client::state (make
> sure to update the dfar member in [4] as is done in [3]).
Good. I'm using Fiasco.OC, though, so I'll have to figure out how to
do this there. Writing a custom kernel function that makes the assembler
call copied from Genode just froze the VM, but I have already asked the
Fiasco.OC people for help.
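
In case it helps anyone reading along, what I'm aiming for is roughly
the following -- only a sketch; where exactly in Fiasco.OC the register
can be read is precisely what I still have to figure out:

  /* the DFAR can only be read from privileged mode, i.e. inside the
   * kernel, so in the data-abort path one would do something like */
  unsigned long dfar;
  asm volatile ("mrc p15, 0, %0, c6, c0, 0" : "=r" (dfar));

  /* ...and store it in an additional 'dfar' member of the Cpu_state
   * that Cpu_thread_client::state() later delivers to the monitor */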

> Then, you would have to continue execution of
> the thread. Unfortunately, this is normally done automatically by
> Genode's core when it sees a new mapping that matches the fault address.
> You don't want to do such a mapping, but there is no explicit "resume
> this faulter". So, you might have to add such an RPC call [5] and its
> back end [6] to the RM interface. This shouldn't be too invasive.
At least this seems to work.
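What I added is roughly along these lines (sketch only; the name
'resume_faulter' is my own choice, not existing Genode API):

  /* include/region_map/region_map.h -- new RPC of the RM interface */
  virtual void resume_faulter(Genode::addr_t fault_addr) = 0;

  GENODE_RPC(Rpc_resume_faulter, void, resume_faulter, Genode::addr_t);
  /* ...with Rpc_resume_faulter appended to GENODE_RPC_INTERFACE(...) */

  /* back end in core's Region_map_component: look up the faulter that
   * is blocked on 'fault_addr' and let it continue without a mapping */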

> If there are further questions please do not hesitate to ask ;)
That's very nice of you, thanks a lot for your help!


Cheers,
Josef

