Hello Jilong,
> Can anyone give me a hint on how to get rid of the "Could not allocate metadata" error msg? Or, how can I effectively increase the quota for that?
Your question is too generic to give you a definite answer. Your complete log output and a way to reproduce the problem (i.e., a run script) would help.
However, let me try to guide you toward finding the problem yourself. The message "Could not allocate metadata" is printed by core's RAM service when there is not enough session quota left for holding metadata about allocated/free memory blocks. Hence, your attempt to upgrade the session quota using 'parent()->upgrade()' was actually spot-on. However, since you report that the upgrade did not solve your problem, I suppose that the session that produced the error message is not the one represented by the 'env()->ram_session_cap()' argument you passed to the upgrade function.
Unfortunately, the message printed by core offers little insight into which client triggered the problem. To reveal a bit more information, you may find the patch "Debug: print session labels if quota exceeds" in the following branch useful:
https://github.com/nfeske/genode/commits/quota_msg
Just cherry-pick the commit and adjust it as needed, e.g., you may enhance the error message with additional status information of the RAM session. Right now, it just appends the session label as additional information, which may already guide you to the right spot.
Another approach to find the troubling session is to let core block once the error condition occurs, i.e., just temporarily insert an infinite loop after the message is printed. This way, core will get stuck at that point and, consequently, not reply to the RPC call that triggered the problem. Now, when the condition occurs, you can enter the Fiasco.OC kernel debugger and look up the thread that is currently doing an IPC call to core's entrypoint thread. Use the 'lp' command to list the threads and look at their respective states. Knowing the troubling thread will probably be helpful.
Good luck! If you manage to track down the problem, I would greatly appreciate it if you reported the issue (and possibly your fix).
Cheers Norman