Hi,
I am trying to open a file.
I used the following code:
//malloc
char * buffer = (char*) Genode::env()->heap()->alloc(sizeof(char)*lSize);

FILE* pFile = fopen(fileName, "rb");
size_t result = fread (buffer, 1, lSize, pFile);
fclose (pFile);

//free (buffer);
env()->heap()->free(buffer, 0);
The code seems to work if the file is quite small.
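(For reference, a self-contained sketch of the same read path; the size query via fseek/ftell and the error checks are additions for illustration and were not part of the code above:)

#include <stdio.h>
#include <base/env.h>

/* sketch: read a whole file into a buffer allocated from Genode's heap */
static char *read_whole_file(char const *fileName, size_t *out_size)
{
	FILE *pFile = fopen(fileName, "rb");
	if (!pFile) return 0;

	/* determine the file size (assumption, not shown in the original snippet) */
	fseek(pFile, 0, SEEK_END);
	size_t lSize = ftell(pFile);
	fseek(pFile, 0, SEEK_SET);

	char *buffer = (char *)Genode::env()->heap()->alloc(lSize);
	size_t result = fread(buffer, 1, lSize, pFile);
	fclose(pFile);

	if (result != lSize) {
		/* short read: release the buffer again */
		Genode::env()->heap()->free(buffer, lSize);
		return 0;
	}

	*out_size = lSize;
	return buffer;
}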
I tried the same with a file of size 128 MB. I have given rump_fs a quite large quota of 520M.
Quota exceeded! amount=69632, size=8192, consumed=65536
Could not allocate metadata
[init -> rump_fs] upgrading quota donation for Env::RAM (8192 bytes)
Quota exceeded! amount=77824, size=8192, consumed=73728
Could not allocate metadata
[init -> rump_fs] upgrading quota donation for Env::RAM (8192 bytes)
[init -> rump_fs] Slab-backend exhausted!
And from this point onwards only Slab-backend exhausted messages are shown.
Is there some maximum size for RAM quotas?
Best regards, Wolfgang
Hi,
further information:
This seems to happen only if rump_fs has a larger quota – the file can be larger if rump_fs has a smaller quota. (Memory tests of the system have shown no errors, so the RAM should be okay.)
Best regards, Wolfgang
Hi Wolfgang,
On 08/11/2014 04:12 PM, w_schmidt@...181... wrote:
Hi,
further information:
this seems to happen only if rump_fs has a larger quota – the file can be larger if rump_fs has a smaller quota. (Memory tests of the system have shown no errors, so the RAM should be okay.)
This is indeed very strange. I hope you have a Genode version where there is no Slab-backend allocator anymore: have a look at 'include/util/allocator_fap.h' in the dde_rump repository; there should only be a PERR message with the slab warning left. Otherwise, your "Rump kernels" version is not up-to-date; they have fixed some serious memory issues there (especially with the paging daemon). If you have a current Genode version, then there is something wrong with the memory accounting, which leads Rump to think it has more memory than the Genode quota actually provides.
Sebastian
Hi,
this seems to happen only if rump_fs has a larger quota – the file can be larger if rump_fs has a smaller quota. (Memory tests of the system have shown no errors, so the RAM should be okay.)

this is indeed very strange. I hope you have a Genode version where there is no Slab-backend allocator anymore: have a look at 'include/util/allocator_fap.h' in the dde_rump repository; there should only be a PERR message with the slab warning left.
The file looks like below.

Best regards, Wolfgang
/**
 * \brief  Fast allocator for porting
 * \author Sebastian Sumpf
 * \date   2013-06-12
 */

/*
 * Copyright (C) 2013-2014 Genode Labs GmbH
 *
 * This file is part of the Genode OS framework, which is distributed
 * under the terms of the GNU General Public License version 2.
 */

#ifndef _INCLUDE__UTIL__ALLOCATOR_FAP_H_
#define _INCLUDE__UTIL__ALLOCATOR_FAP_H_

#include <base/allocator_avl.h>
#include <dataspace/client.h>
#include <rm_session/connection.h>

namespace Allocator {
	template <unsigned VM_SIZE, typename POLICY> class Backend_alloc;
	template <unsigned VM_SIZE, typename POLICY> class Fap;
}

namespace Allocator {

	using namespace Genode;

	struct Default_allocator_policy
	{
		static int  block()      { return 0; }
		static void unblock(int) { }
	};

	template <typename POLICY>
	struct Policy_guard
	{
		int val;
		Policy_guard()  { val = POLICY::block(); }
		~Policy_guard() { POLICY::unblock(val); }
	};

	/**
	 * Back-end allocator for Genode's slab allocator
	 */
	template <unsigned VM_SIZE, typename POLICY = Default_allocator_policy>
	class Backend_alloc : public Genode::Allocator,
	                      public Genode::Rm_connection
	{
		private:

			enum {
				BLOCK_SIZE = 1024 * 1024,          /* 1 MB */
				ELEMENTS   = VM_SIZE / BLOCK_SIZE, /* MAX number of dataspaces in VM */
			};

			typedef Genode::addr_t                   addr_t;
			typedef Genode::Ram_dataspace_capability Ram_dataspace_capability;
			typedef Genode::Allocator_avl            Allocator_avl;

			addr_t                   _base;              /* virt. base address */
			Cache_attribute          _cached;            /* non-/cached RAM */
			Ram_dataspace_capability _ds_cap[ELEMENTS];  /* dataspaces to put in VM */
			addr_t                   _ds_phys[ELEMENTS]; /* physical bases of dataspaces */
			int                      _index = 0;         /* current index in ds_cap */
			Allocator_avl            _range;             /* manage allocations */
			bool                     _quota_exceeded = false;

			bool _alloc_block()
			{
				if (_quota_exceeded)
					return false;

				if (_index == ELEMENTS) {
					PERR("Slab-backend exhausted!");
					return false;
				}

				Policy_guard<POLICY> guard;

				try {
					_ds_cap[_index] = Genode::env()->ram_session()->alloc(BLOCK_SIZE, _cached);
					/* attach at index * BLOCK_SIZE */
					Rm_connection::attach_at(_ds_cap[_index], _index * BLOCK_SIZE, BLOCK_SIZE, 0);
					/* lookup phys. address */
					_ds_phys[_index] = Genode::Dataspace_client(_ds_cap[_index]).phys_addr();
				} catch (Genode::Ram_session::Quota_exceeded) {
					PWRN("Backend allocator exhausted");
					_quota_exceeded = true;
					return false;
				} catch (Genode::Rm_session::Attach_failed) {
					PWRN("Backend VM region exhausted");
					_quota_exceeded = true;
					return false;
				}

				/* return base + offset in VM area */
				addr_t block_base = _base + (_index * BLOCK_SIZE);
				++_index;

				_range.add_range(block_base, BLOCK_SIZE);
				return true;
			}

		public:

			Backend_alloc(Cache_attribute cached)
			: Rm_connection(0, VM_SIZE), _cached(cached), _range(Genode::env()->heap())
			{
				/* reserver attach us, anywere */
				_base = Genode::env()->rm_session()->attach(dataspace());
			}

			/**
			 * Allocate
			 */
			bool alloc(size_t size, void **out_addr)
			{
				bool done = _range.alloc(size, out_addr);

				if (done)
					return done;

				done = _alloc_block();
				if (!done)
					return false;

				return _range.alloc(size, out_addr);
			}

			void *alloc_aligned(size_t size, int align = 0)
			{
				void *addr;

				if (!_range.alloc_aligned(size, &addr, align).is_error())
					return addr;

				if (!_alloc_block())
					return 0;

				if (_range.alloc_aligned(size, &addr, align).is_error()) {
					PERR("Backend allocator: Unable to allocate memory (size: %zu align: %d:)",
					     size, align);
					return 0;
				}

				return addr;
			}

			void   free(void *addr, size_t size) { _range.free(addr, size); }
			size_t overhead(size_t size)         { return 0; }
			bool   need_size_for_free() const override { return false; }

			/**
			 * Return phys address for given virtual addr.
			 */
			addr_t phys_addr(addr_t addr)
			{
				if (addr < _base || addr >= (_base + VM_SIZE))
					return ~0UL;

				int index = (addr - _base) / BLOCK_SIZE;

				/* physical base of dataspace */
				addr_t phys = _ds_phys[index];

				if (!phys)
					return ~0UL;

				/* add offset */
				phys += (addr - _base - (index * BLOCK_SIZE));
				return phys;
			}

			bool inside(addr_t addr) const {
				return (addr >= _base) && (addr < (_base + VM_SIZE)); }
	};

	/**
	 * Interface
	 */
	template <unsigned VM_SIZE, typename POLICY = Default_allocator_policy>
	class Fap
	{
		private:

			typedef Allocator::Backend_alloc<VM_SIZE, POLICY> Backend_alloc;

			Backend_alloc _back_allocator;

		public:

			Fap(bool cached) : _back_allocator(cached ? CACHED : UNCACHED) { }

			void *alloc(size_t size, int align = 0) {
				return _back_allocator.alloc_aligned(size, align); }

			void free(void *addr, size_t size) { _back_allocator.free(addr, size); }

			addr_t phys_addr(void *addr) {
				return _back_allocator.phys_addr((addr_t)addr); }
	};

} /* namespace Allocator */

#endif /* _INCLUDE__UTIL__ALLOCATOR_FAP_H_ */
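(For orientation, a minimal sketch of how this interface could be instantiated by a driver back end; the 128 MiB VM window, the cached-RAM choice, and the function names are assumptions for illustration, not taken from dde_rump:)

#include <util/allocator_fap.h>

enum { BACKEND_VM_SIZE = 128 * 1024 * 1024 }; /* assumed VM window of the back end */

/* one allocator instance backing, e.g., a driver's memory pool (cached RAM) */
static Allocator::Fap<BACKEND_VM_SIZE> &backend_fap()
{
	static Allocator::Fap<BACKEND_VM_SIZE> inst(true);
	return inst;
}

void *backend_alloc(Genode::size_t size, int align)
{
	/* returns 0 once the RAM quota or the VM window is exhausted */
	return backend_fap().alloc(size, align);
}

void backend_free(void *addr, Genode::size_t size)
{
	backend_fap().free(addr, size);
}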
Hi Wolfgang,
On 08/18/2014 08:09 PM, w_schmidt@...181... wrote:
Hi,
this seems to happen only if rump_fs has a larger quota – the file can be larger if rump_fs has a smaller quota. (Memory tests of the system have shown no errors, so the RAM should be okay.)

this is indeed very strange. I hope you have a Genode version where there is no Slab-backend allocator anymore: have a look at 'include/util/allocator_fap.h' in the dde_rump repository; there should only be a PERR message with the slab warning left.

The file looks like below.
Yes, that is the right one. We most likely have a problem with memory accounting in this case. Regarding your block cache question: the block cache has not really been tested up until now, so there may still be bugs. If you could provide a branch (e.g., on GitHub) or a run script, I might be able to have a look at it. On the other hand, Stefan will be back next week and, since he wrote the thing, he might be able to clear things up.
Cheers,
Sebastian
Hi,
Did Stefan have time to look into the issue with the block cache?
Best regards, Wolfgang
Hi,
On 09/02/2014 09:11 PM, w_schmidt@...181... wrote:
Hi,
Did Stefan have time to look into the issue with the block cache?
I'm afraid not, I didn't have the time. To reproduce the problems you experienced, it would be nice to have a complete and working run script, as well as the whole output of both the broken and the intact scenario. Moreover, if you have changed or added any code, a topic branch or at least a patch would be nice.
For now, I've built my own script based on the "rump_ext2.run" script (attached), and will try to look into the issue.
Regards Stefan
Hello Wolfgang,
I've examined the problem; in fact, there are two bugs in the blk_cache that prevented the rump server from working. Please try out the following two patches:
https://github.com/skalk/genode/commit/edc225daf73ddaa1d8326b1a4efcff789cee9...
https://github.com/skalk/genode/commit/7eee84f76450deeeaecbf7e565a87d9899b36...
or directly use my topic branch for testing:
https://github.com/skalk/genode/tree/issue%231249
Nevertheless, I've experienced problems when exercising the rump_fs server and the corresponding VFS example application. Rump might leave the file system in an inconsistent state, so that you cannot run the test twice on the same file system. Maybe it is just a missing 'sync' in the client application? But I'm not familiar with the inner workings of the rump file-system server, nor with the libc tests.
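(For illustration, a hedged sketch of what an explicit sync in the libc client could look like; whether such a call is actually missing in the test, and whether the libc back end forwards it to rump_fs, is an assumption:)

#include <stdio.h>
#include <unistd.h>

/* flush libc buffers and ask the file system to commit pending blocks */
static void flush_to_disk(FILE *file)
{
	fflush(file);         /* flush stdio buffers into the VFS */
	fsync(fileno(file));  /* commit this file's data */
	sync();               /* commit all outstanding blocks */
}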
Regards Stefan
Hello Stefan,
Yes, thanks, your demo project looks much like the one I posted before.

I tried your patch with a "real" ext2 partition. What I did was just "reading" files, like:
FILE* file = fopen(fileName, "rb");
fread(buffer, 1, size, file);
fclose(file);
I had the following issues:
- without blk_cache, rump_fs could read files; restart the machine, read again - no problem doing it multiple times
- with blk_cache, rump_fs could read files; after a restart, the file system was corrupted. The FS could be fixed with fsck, but gets corrupted again the next time rump_fs is used together with blk_cache on that partition.
So somehow a similar result. The curious thing is that there seems to be no problem with the FS (except a rather slow read speed) when blk_cache is not used.
Best regards, Wolfgang
So no writes should occur (except maybe timestamps).
Hello Wolfgang,
On 09/04/2014 08:33 PM, w_schmidt@...181... wrote:
Hello Stefan,
Yes, thanks, your demo project looks much like the one I posted before.

I tried your patch with a "real" ext2 partition. What I did was just "reading" files, like:
FILE* file = fopen(fileName, "rb");
fread(buffer, 1, size, file);
fclose(file);
I had the following issues:
- without blk_cache, rump_fs could read files; restart the machine, read again - no problem doing it multiple times
- with blk_cache, rump_fs could read files; after a restart, the file system was corrupted. The FS could be fixed with fsck, but gets corrupted again the next time rump_fs is used together with blk_cache on that partition.
Well, I've tested the scenario on a "real" ext2 partition too, and observed the corruption of the file system with and without the blk_cache. Therefore, I assume it isn't related to the cache, but to the rump fs server or its usage.
AFAIK, the rump fs server periodically synchronizes with the block device; that is why it sometimes might get corrupted and sometimes not, depending on the timing (usage of blk_cache will influence the timing). But again, I'm not familiar with the rump fs server.
So somehow a similar result. The curious thing is that there seems to be no problem with the FS (except a rather slow read speed) when blk_cache is not used.
Best regards, Wolfgang
So no writes should occur (except maybe timestamps).
Again, if you want me or other people from the mailing list to help identify problems, it would be nice not to post just a snippet of code or a snippet of a configuration file, but a full working example, e.g. a branch on GitHub, or a complete patch together with the information about which revision it applies to. That lowers the barrier to reproducing your observed results.
Regards Stefan