Malloc / free and rump_fs -> Slab-backend exhausted

Stefan Kalkowski stefan.kalkowski at ...1...
Fri Sep 5 08:43:08 CEST 2014


Hello Wolfgang,

On 09/04/2014 08:33 PM, w_schmidt at ...181... wrote:
> Hello Stefan,
> 
> yes, thanks, your demo project looks much like the one I posted before.
> 
> I tried your patch with a "real" ext2 partition.
> What I did was just "reading" files, like:
> 
> FILE* file = fopen(fileName, "rb");
> fread(buffer, 1, size, file);
> fclose(file);
> 
> I had the following issues:
> 
> - without blk_cache, rump_fs could read files. Restarting the machine and
> reading again worked, multiple times, with no problem
> - with blk_cache, rump_fs could read files, but after a restart the file
> system was corrupted. The FS could be fixed with fsck, but became corrupt
> again the next time rump_fs was used together with blk_cache on the partition.

Well, I've tested the scenario on a "real" ext2 partition too, and
observed the corruption of the file system both with and without the
blk_cache. Therefore, I assume it isn't related to the cache, but to the
rump fs server, or its usage.

AFAIK, the rump fs server periodically synchronizes with the block
device; that's why it sometimes might get corrupted and sometimes not,
depending on the timing (the use of blk_cache will influence the timing).
But again, I'm not familiar with the rump fs server.

> 
> So somehow a similar result. The curious thing is that there seems to be 
> no problem with the fs (except a rather slow read speed) when blk_cache is not 
> used.
> 
> Best regards,
> Wolfgang
> 
> So no writes should occur (except timestamps, maybe).

Again, if you want me, or other people on the mailing list, to help
identify problems, it would be nice not to post just a snippet of
code or a snippet of a configuration file, but a full working example,
e.g., a branch on GitHub, or a complete patch together with the
information on which revision it applies to. That lowers the barrier to
reproducing your observed results.

Regards Stefan

> 
> 
> -----Original Message----- 
> From: Stefan Kalkowski
> Sent: Wednesday, September 3, 2014 3:14 PM
> To: genode-main at lists.sourceforge.net
> Subject: Re: Malloc / free and rump_fs -> Slab-backend exhausted
> 
> Hello Wolfgang,
> 
> I've examined the problem; in fact there are two bugs in the blk_cache
> that prevented the rump server from working. Please try out the
> following two patches:
> 
> 
> https://github.com/skalk/genode/commit/edc225daf73ddaa1d8326b1a4efcff789cee948d.diff
> 
> https://github.com/skalk/genode/commit/7eee84f76450deeeaecbf7e565a87d9899b36213.diff
> 
> or directly use my topic branch for testing:
> 
>   https://github.com/skalk/genode/tree/issue%231249
> 
> Nevertheless, I've experienced problems when running the rump_fs server
> and the corresponding VFS example application. Rump might leave the file
> system in an inconsistent state so that you can't run the test twice on
> the same file system. Maybe just a missing 'sync' in the client
> application? But I'm not familiar with the inner workings of the rump
> file-system server, nor with the libc tests.
> 
> Regards
> Stefan
> 
> On 09/03/2014 12:08 PM, Stefan Kalkowski wrote:
>> Hi,
>>
>> On 09/02/2014 09:11 PM, w_schmidt at ...181... wrote:
>>> Hi,
>>>
>>> did Stefan have time to look into the issue with the block cache?
>>
>> I'm afraid not, I didn't have the time. To easily reproduce the
>> problems you experienced, it would be nice to have a complete and working
>> run script, as well as the whole output of both the broken and the intact
>> scenario. Moreover, if you've changed or added any code, a topic branch
>> or at least a patch would be nice.
>>
>> Now, I've built my own script starting from the "rump_ext2.run" script
>> (attached), and will try to look into the issue.
>>
>> Regards
>> Stefan
>>
>>>
>>> Best regards,
>>> Wolfgang
>>>
>>> -----Ursprüngliche Nachricht----- 
>>> From: Sebastian Sumpf
>>> Sent: Thursday, August 21, 2014 8:40 PM
>>> To: Genode OS Framework Mailing List
>>> Subject: Re: Malloc / free and rump_fs -> Slab-backend exhausted
>>>
>>> Hi Wolfgang,
>>>
>>> On 08/18/2014 08:09 PM, w_schmidt at ...181... wrote:
>>>> Hi,
>>>>
>>>>>> this seems to happen only if rump_fs has a larger quota – the file
>>>>>> can be larger if rump_fs has a smaller quota. (Memory tests of the
>>>>>> system have shown no errors, so the RAM should be okay.)
>>>>
>>>>> this is indeed very strange. I hope you have a Genode version where
>>>>> there is no Slab-backend allocator; have a look at
>>>>> 'include/util/allocator_fap.h' in the dde_rump repository, there should
>>>>> only be a PERR message with the slab warning left.
>>>>
>>>> The file looks like below,
>>>
>>> Yes, that is the right one. We most likely have a problem with memory
>>> accounting in this case. Regarding your block-cache question: the block
>>> cache has not really been tested up until now, so there may still be
>>> bugs. If you could provide a branch (e.g., on GitHub) or a run script, I
>>> might be able to have a look at it. On the other hand, Stefan will be
>>> back next week and, since he wrote the thing, he might be able to clear
>>> things up.
>>>
>>> Cheers,
>>>
>>> Sebastian
>>>
>>>
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> genode-main mailing list
>> genode-main at lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/genode-main
>>
> 

-- 
Stefan Kalkowski
Genode Labs

http://www.genode-labs.com/ · http://genode.org/



