Using a real harddisk
Sebastian Sumpf
Sebastian.Sumpf at ...1...
Wed May 14 16:43:34 CEST 2014
Hi,
On 05/11/2014 09:17 PM, w_schmidt at ...181... wrote:
> Hi,
>
> I have a few stupid questions regarding the use of real hardware.
>
> I want to access files from a harddisk using a block cache. The partition
> would be /sda7/test in ext2 format. (and for example file1.txt) I tried to
> start nova directly from disk - works with mouse & keyboard now.
>
> With the example file in /dde_rump/run/rump2_ext2.run I do not see how I
> could use an existing harddrive.
>
> I understand the script as following:
> First a bin/ext2.raw file is created and then a file system is created with
> mke2fs.
> The ram_blk points to the file. How can it be pointed to a drive like /sda7
> instead?
> Is it even possible now to use a real hard-disk file system? Or would I need
> to create a file like in the example (only permanent) and put the file
> together with the rest of the program in grub so that nova boots with
> providing the file?
Assuming your machine is an x86 derivative and supports AHCI, you could
use Genode's AHCI driver, which will expose a block session (see:
os/src/drivers/ahci/README). In order to access the partitions on your
disk, a server called 'part_blk' is required (see:
os/src/server/part_blk/README for its configuration). So, you would have
to remove 'ram_blk', add 'ahci_drv' and 'part_blk' (routed to
'ahci_drv'), and adjust 'rump_fs' to be routed to partition 7 of 'part_blk'.
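For illustration, here is a rough sketch of how the corresponding '<config>'
nodes in the run script could look. The component names follow the ones above;
the exact attribute names (e.g., of the 'part_blk' policy) and RAM quanta may
differ between Genode versions, so please double-check against the READMEs:

  <start name="ahci_drv">
    <resource name="RAM" quantum="1M"/>
    <provides> <service name="Block"/> </provides>
  </start>

  <start name="part_blk">
    <resource name="RAM" quantum="1M"/>
    <provides> <service name="Block"/> </provides>
    <!-- hand out partition 7 to the file-system server -->
    <config> <policy label="rump_fs" partition="7"/> </config>
    <route>
      <!-- obtain the whole disk from the AHCI driver -->
      <service name="Block"> <child name="ahci_drv"/> </service>
      <any-service> <parent/> </any-service>
    </route>
  </start>

  <start name="rump_fs">
    <resource name="RAM" quantum="8M"/>
    <provides> <service name="File_system"/> </provides>
    <config fs="ext2"> <policy label="" root="/" writeable="yes"/> </config>
    <route>
      <!-- use the partition exported by part_blk instead of ram_blk -->
      <service name="Block"> <child name="part_blk"/> </service>
      <any-service> <parent/> </any-service>
    </route>
  </start>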
> The second part of the question is according to things needed to execute the
> rump_ext2.run script.
> As I tried to run 'prepare PKG=libs', I needed to install: subversion,
> flex, bison
> Are they missing in the build tools or is no specific version needed?
There should be no specific version needed.
> The next questions are regarding the block cache. If I try to execute the
> script, I get a 'Tests finished successful', but a lot of messages like
> [init] Cannot respond to resource request - out of memory
> [init ->blk_cache] could not expand dataspace pool
The cache is greedy and tries to get as much memory from its parent as
possible. By the way, rump also has a built-in block cache.
> To be honest I do not see in the configuration at all how it should work.
> If I understand it correctly, a qemu with 64 MB RAM is started, and
> test-blk-cli is given a quantum of 2GB?
That only means that test-blk-cli will get all the remaining RAM.
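In the run script, that resource declaration looks roughly like the sketch
below; when the declared quantum exceeds what is physically available, init
simply assigns the child whatever RAM is left after the other children got
their share:

  <start name="test-blk-cli">
    <!-- over-sized quantum: the child receives all remaining RAM -->
    <resource name="RAM" quantum="2G"/>
    ...
  </start>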
> The test-blk-cli gets a route for Block to the Cache and the Cache to
> Blksrv. What I don't understand with the sizes is: why does the client have
> the largest size and not the smallest? If it has the largest, it could just
> read complete files and keep them in memory?
I think the author just did not want to calculate the amount of RAM
required for the test program.
> I tried the following: I copied the rump_ext2.run script and created an
> entry with start name blk_cache and put a route from rump_fs to Block child
> blk_cache, and from blk_cache a route to child ram_blk.
> I started it and then saw that the script said that blk_cache was not found,
> but it ran successfully.
>
> My question is therefore: if a service name is provided by multiple
> components and the one named in a route is not available, is another provider
> of the same service used automatically? Or have I forgotten something?
It worked because 'blk_cache' wasn't started, and therefore only one
block service was announced ('ram_blk'). If the block cache had been
started and 'test-libc_vfs' didn't have a route entry for the block
service, init would complain about ambiguous routes to service "Block".
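To make the route unambiguous once both 'ram_blk' and 'blk_cache' announce a
block service, the client needs an explicit route entry. A minimal sketch,
using the component names from this thread (RAM quanta chosen arbitrarily):

  <start name="blk_cache">
    <resource name="RAM" quantum="2M"/>
    <provides> <service name="Block"/> </provides>
    <route>
      <!-- the cache itself is backed by ram_blk -->
      <service name="Block"> <child name="ram_blk"/> </service>
      <any-service> <parent/> </any-service>
    </route>
  </start>

  <start name="rump_fs">
    ...
    <route>
      <!-- without this entry, two "Block" providers would make
           the route ambiguous -->
      <service name="Block"> <child name="blk_cache"/> </service>
      <any-service> <parent/> </any-service>
    </route>
  </start>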
> Afterwards I included server/blk_cache in the 'set build_components' section
> (after drivers/timer) and entered blk_cache as a boot module. Is this the
> correct way of including the cache?
Yes it is!
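For reference, if you prefer not to edit the original lists in place, one way
to extend them in the run script is the following (assuming the script defines
'build_components' and 'boot_modules' as Tcl lists, like rump_ext2.run does):

  # build the block cache along with the other components
  lappend build_components server/blk_cache

  # ... and include the resulting binary in the boot image
  lappend boot_modules blk_cache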
> The result of this was that I got an output with: [init -> blk_cache]
> updating quota for SIGNAL session, an init rump_fs upgrade-quota output, and
> afterwards a long stall after init -> rump_fs Backend::Backend() Backend
> blk_size 512. Afterwards the script stopped with Test execution timeout
> (with error 254). What could I have done wrong with the configuration of
> this scenario?
Ok, I will try to reproduce the behavior.
Regards,
Sebastian
--
Sebastian Sumpf
Genode Labs
http://www.genode-labs.com · http://genode.org
Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth