Hello Genodians!
I tried to build Genode for the Gumstix Overo platform. I added a new build target, foc_overo, and implemented framebuffer and touchscreen drivers. I pushed my changes to my fork on GitHub: https://github.com/iloskutov/genode
My first question is about core. Core for Fiasco.OC uses LD_TEXT_ADDR = 0x140000 in base-foc/src/core/target.inc. On the Overo, memory locations start at 0x80000000, so I think I need to set LD_TEXT_ADDR = 0x80140000. I added it to base-foc/src/core/arm/target.inc, but there it applies to all ARM targets. How can I set it for my Overo target only?
In our project we use the Chestnut43 board for prototyping. This board has a 4.3” LCD with 480x272 pixels. I took the display configuration for this LCD from the Linux driver. The display works, the picture is drawn, and the colors are correct. But sometimes a window is drawn with distortion (normal drawing: http://dl.dropbox.com/u/8558928/pic_1.jpg , distorted: http://dl.dropbox.com/u/8558928/pic_2.jpg http://dl.dropbox.com/u/8558928/pic_3.jpg ). I can’t solve this yet. Maybe I have a wrong configuration for the display controller.
The touchscreen is based on the ADS7846 chip. I wrote a simple driver. How should I implement calibration? I see two possible solutions:
1. Create a new input-event type for the touchscreen. The driver could then send raw data to Nitpicker, and Nitpicker would translate this data to screen coordinates using calibration constants loaded from some application.
2. Load the calibration data into the touchscreen driver. But how can this be done correctly?
Maybe you can suggest the best solution?
The driver implementations are simple. I haven’t written drivers for GPIO and McSPI as independent components yet.
To test the platform, I wrote a simple Qt application. I don’t understand how dependency resolution works: where do I need to declare all the dependencies so that my target is built automatically? I used qt4/run/qt4.run as the base for my run script. Running “make run/qt_test” failed with the error “cannot stat `bin/dejavusans.lib.so'”; not all libraries were built. If I run “make run/qt4” first, then my qt_test builds without errors.
Hi Ivan,
I tried to build Genode for the Gumstix Overo platform. I added a new build target, foc_overo, and implemented framebuffer and touchscreen drivers. I pushed my changes to my fork on GitHub: https://github.com/iloskutov/genode
I already noticed your fork yesterday. That is an impressive and quite unexpected line of work! :-)
My first question is about core. Core for Fiasco.OC uses LD_TEXT_ADDR = 0x140000 in base-foc/src/core/target.inc. On the Overo, memory locations start at 0x80000000, so I think I need to set LD_TEXT_ADDR = 0x80140000. I added it to base-foc/src/core/arm/target.inc, but there it applies to all ARM targets. How can I set it for my Overo target only?
I have to look into this problem more closely. Intuitively, I think it would be good to have a way for specifying core's virtual address in a 'spec-overo.mk' file.
In our project we use the Chestnut43 board for prototyping. This board has a 4.3” LCD with 480x272 pixels. I took the display configuration for this LCD from the Linux driver. The display works, the picture is drawn, and the colors are correct. But sometimes a window is drawn with distortion (normal drawing: http://dl.dropbox.com/u/8558928/pic_1.jpg , distorted: http://dl.dropbox.com/u/8558928/pic_2.jpg http://dl.dropbox.com/u/8558928/pic_3.jpg ). I can’t solve this yet. Maybe I have a wrong configuration for the display controller.
To me this looks like a typical cache artifact. The syscall bindings of Fiasco.OC provide several functions for dealing with caches on ARM platforms (i.e., see 'cache.h'). Those functions are unused by Genode until now because we haven't experienced such artifacts with the PBXA9 platform or Qemu. Maybe you could investigate if these cache-related functions are relevant to your problem and if so, how they could be put to use in a clean way within Genode?
The touchscreen is based on the ADS7846 chip. I wrote a simple driver. How should I implement calibration? I see two possible solutions:
1. Create a new input-event type for the touchscreen. The driver could then send raw data to Nitpicker, and Nitpicker would translate this data to screen coordinates using calibration constants loaded from some application.
2. Load the calibration data into the touchscreen driver. But how can this be done correctly?
Maybe you can suggest the best solution?
The best would be to keep the concerns separated. So the touch-screen driver should produce raw data. Otherwise, we would lose information. Nitpicker, on the other hand, expects input events that are already calibrated with screen coordinates. Adding calibration support into nitpicker would make it more complex, which I'd like to avoid.
How about introducing a dedicated input-event-calibration component? This component would sit in-between the touch-screen driver and nitpicker (this can easily be done using Genode's session-routing concept). The new component would obtain its calibration parameters from its config file, open an input session (which gets routed to the touch-screen driver), and, in turn, announce an input service itself (providing the calibrated input events). Thinking a bit further, this component could be implemented in an entirely generic fashion. Its configuration would be the parameters of an affine transformation in the form of a matrix. So this component could then be used for arbitrary transformations of (absolute) input events. Do you like this idea?
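To make the idea a bit more concrete, here is a rough sketch of the per-event mapping such a generic filter could apply (nothing of this exists yet; the struct and the coefficients a..f are made-up names, and the coefficients would come from the component's config):

  /* sketch: affine mapping from raw touch coordinates to screen coordinates */
  struct Affine
  {
      float a, b, c, d, e, f;  /* read from the component's configuration */

      void apply(int raw_x, int raw_y, int &screen_x, int &screen_y) const
      {
          screen_x = (int)(a*raw_x + b*raw_y + c);
          screen_y = (int)(d*raw_x + e*raw_y + f);
      }
  };

The driver would keep emitting raw absolute motion events, and the filter would rewrite their coordinates with such an 'apply' step before passing them on to nitpicker.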
To test the platform, I wrote a simple Qt application. I don’t understand how dependency resolution works: where do I need to declare all the dependencies so that my target is built automatically? I used qt4/run/qt4.run as the base for my run script. Running “make run/qt_test” failed with the error “cannot stat `bin/dejavusans.lib.so'”; not all libraries were built. If I run “make run/qt4” first, then my qt_test builds without errors.
Apparently, your test application is much simpler than the Qt4 application used in qt4.run. It does not need 'dejavusans.lib.so'. Therefore, the build system did not build this library (dependencies at work! :-) But in your run script, you still specify the file name 'dejavusans.lib.so' in the list of boot modules that should be put into the final image. Could you try to just remove this entry?
Again, congrats for your amazing work!
Cheers Norman
Hi
I am also porting Genode to a new ARM board. Right now I am stuck with libc. It just halts with abort() as soon as it initializes. I cannot use GDB at this stage. Is there any other way of debugging this? Adding prints to libc doesn't work since the problem occurs when libc init starts.
Michael Sent with Blackberry
Hi Norman,
Thanks for the fast response.
My first question is about core. Core for Fiasco.OC uses LD_TEXT_ADDR = 0x140000 in base-foc/src/core/target.inc. On the Overo, memory locations start at 0x80000000, so I think I need to set LD_TEXT_ADDR = 0x80140000. I added it to base-foc/src/core/arm/target.inc, but there it applies to all ARM targets. How can I set it for my Overo target only?
I have to look into this problem more closely. Intuitively, I think it would be good to have a way for specifying core's virtual address in a 'spec-overo.mk' file.
Ok. I fixed it.
In our project we use the Chestnut43 board for prototyping. This board has a 4.3” LCD with 480x272 pixels. I took the display configuration for this LCD from the Linux driver. The display works, the picture is drawn, and the colors are correct. But sometimes a window is drawn with distortion (normal drawing: http://dl.dropbox.com/u/8558928/pic_1.jpg , distorted: http://dl.dropbox.com/u/8558928/pic_2.jpg http://dl.dropbox.com/u/8558928/pic_3.jpg ). I can’t solve this yet. Maybe I have a wrong configuration for the display controller.
To me this looks like a typical cache artifact. The syscall bindings of Fiasco.OC provide several functions for dealing with caches on ARM platforms (i.e., see 'cache.h'). Those functions are unused by Genode until now because we haven't experienced such artifacts with the PBXA9 platform or Qemu. Maybe you could investigate if these cache-related functions are relevant to your problem and if so, how they could be put to use in a clean way within Genode?
I'll try to investigate this.
The touchscreen is based on the ADS7846 chip. I wrote a simple driver. How should I implement calibration? I see two possible solutions:
1. Create a new input-event type for the touchscreen. The driver could then send raw data to Nitpicker, and Nitpicker would translate this data to screen coordinates using calibration constants loaded from some application.
2. Load the calibration data into the touchscreen driver. But how can this be done correctly?
Maybe you can suggest the best solution?
The best would be to keep the concerns separated. So the touch-screen driver should produce raw data. Otherwise, we would lose information. Nitpicker, on the other hand, expects input events that are already calibrated with screen coordinates. Adding calibration support into nitpicker would make it more complex, which I'd like to avoid.
How about introducing a dedicated input-event-calibration component? This component would sit in-between the touch-screen driver and nitpicker (this can easily be done using Genode's session-routing concept). The new component would obtain its calibration parameters from its config file, open an input session (which gets routed to the touch-screen driver), and, in turn, announce an input service itself (providing the calibrated input events). Thinking a bit further, this component could be implemented in an entirely generic fashion. Its configuration would be the parameters of an affine transformation in the form of a matrix. So this component could then be used for arbitrary transformations of (absolute) input events. Do you like this idea?
It's a good idea. I'll try to implement this component.
To test the platform, I wrote a simple Qt application. I don’t understand how dependency resolution works: where do I need to declare all the dependencies so that my target is built automatically? I used qt4/run/qt4.run as the base for my run script. Running “make run/qt_test” failed with the error “cannot stat `bin/dejavusans.lib.so'”; not all libraries were built. If I run “make run/qt4” first, then my qt_test builds without errors.
Apparently, your test application is much simpler than the Qt4 application used in qt4.run. It does not need 'dejavusans.lib.so'. Therefore, the build system did not build this library (dependencies at work! :-) But in your run script, you still specify the file name 'dejavusans.lib.so' in the list of boot modules that should be put into the final image. Could you try to just remove this entry?
Yes, everything works now. Sorry, I was too quick to ask this question; I should have figured it out myself :)
Hi Michael,
On 16.02.2012 14:07, Michael Grunditz wrote:
Hi
I am also porting Genode to a new ARM board. Right now I am stuck with libc. It just halts with abort() as soon as it initializes. I cannot use GDB at this stage. Is there any other way of debugging this? Adding prints to libc doesn't work since the problem occurs when libc init starts.
that sounds like you're getting an exception at that point.
If you're using the Fiasco.OC kernel together with Genode, you can of course use its included kernel debugger, which is quite feature-rich. You can invoke it by hand via the serial line by pressing escape, or you can put an 'enter_kdebug("WAIT")' statement at appropriate places in your code resp. the libc initialization. For that, you have to include the following beforehand:
  namespace Fiasco {
  #include <l4/sys/kdebug.h>
  }
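For example, right before the code you want to inspect you could then add a line like the following (the label string is arbitrary and merely shown by the debugger when execution stops there):

  enter_kdebug("before libc init");  /* execution halts here until you resume it from the kernel debugger */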
I fear the kernel-debugger's usage isn't that self-explanatory, but you can start playing around after looking at the help-screen via '?'. Something often useful is dumping the stacktrace of a thread via 'bt'. To get meaningful symbols instead of plain addresses there is a small tool in 'base-foc/contrib/kernel/fiasco/tool/backtrace'.
Hope that helps Regards Stefan
On 02/16/2012 02:27 PM, Stefan Kalkowski wrote:
Hi Michael,
On 16.02.2012 14:07, Michael Grunditz wrote:
Hi
I am also porting Genode to a new ARM board. Right now I am stuck with libc. It just halts with abort() as soon as it initializes. I cannot use GDB at this stage. Is there any other way of debugging this? Adding prints to libc doesn't work since the problem occurs when libc init starts.
that sounds like you're getting an exception at that point.
I guessed that
If you're using the Fiasco.OC kernel together with Genode, you can of course use its included kernel debugger, which is quite feature-rich. You can invoke it by hand via the serial line by pressing escape, or you can put an 'enter_kdebug("WAIT")' statement at appropriate places in your code resp. the libc initialization. For that, you have to include the following beforehand:
  namespace Fiasco {
  #include <l4/sys/kdebug.h>
  }
I fear the kernel-debugger's usage isn't that self-explanatory, but you can start playing around after looking at the help-screen via '?'. Something often useful is dumping the stacktrace of a thread via 'bt'. To get meaningful symbols instead of plain addresses there is a small tool in 'base-foc/contrib/kernel/fiasco/tool/backtrace'.
Just so I understand this: backtrace is used like 'cat bt.txt | ./backtrace /path/to/bin'? I am a little bit confused about which binary I should compare against, i.e., I don't get any match.
Hope that helps Regards Stefan
- snip -
On 02/16/2012 03:55 PM, Michael Grunditz wrote:
Just so I understand this: backtrace is used like 'cat bt.txt | ./backtrace /path/to/bin'? I am a little bit confused about which binary I should compare against, i.e., I don't get any match.
Ok, so I got matches in the kernel. Thanks, at least it gives me a hint on where to start looking.
Hi Michael,
Just so I understand this: backtrace is used like 'cat bt.txt | ./backtrace /path/to/bin'? I am a little bit confused about which binary I should compare against, i.e., I don't get any match.
by linking your program against the libc, it automatically becomes a dynamically linked executable. Hence, the program's execution does not start at the program binary but inside the dynamic linker (ld.lib.so), which is then responsible for loading the real binary along with the shared libs. At startup, all instruction pointers found in the backtrace most likely refer to instructions inside 'ld.lib.so'.
Cheers Norman
On 16.02.2012 16:20, Michael Grunditz wrote:
Just so I understand this: backtrace is used like 'cat bt.txt | ./backtrace /path/to/bin'? I am a little bit confused about which binary I should compare against, i.e., I don't get any match.
Ok, so I got matches in the kernel. Thanks, at least it gives me a hint on where to start looking.
Sorry, I missed that you're debugging libc, which is a shared library, so it's not in the binary anyway, and backtrace won't work out of the box. I fear you have to do some steps manually: look up in which part (binary, ldso, libc, ...) the addresses from the stack lie, and use the offsets from their linking start addresses to find the appropriate spot in the code.
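As a purely illustrative example (made-up numbers): if ld.lib.so is loaded starting at 0x30000 and a backtrace entry reads 0x30a40, the interesting instruction lies at offset 0xa40 = 0x30a40 - 0x30000 into the library, which you can then look up in the library's disassembly.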
Regards Stefan
On 02/16/2012 04:27 PM, Stefan Kalkowski wrote:
Sorry, I missed that you're debugging libc, which is a shared library, so it's not in the binary anyway, and backtrace won't work out of the box. I fear you have to do some steps manually: look up in which part (binary, ldso, libc, ...) the addresses from the stack lie, and use the offsets from their linking start addresses to find the appropriate spot in the code.
Regards Stefan
Maybe it actually went a little bit further than I thought ...
Ok, btw this is the output:
[init -> test-libc] C++ runtime: int
[init -> test-libc] void* abort(): abort called
I guess I get into the C++ runtime, but what does "int" mean in this context? When I start test-libc on linux/genode, I get "[init -> test-libc] Starting ldso ..." as the first output.
/Michael
Hello again,
[init -> test-libc] C++ runtime: int
[init -> test-libc] void* abort(): abort called
I guess I get into the C++ runtime, but what does "int" mean in this context? When I start test-libc on linux/genode, I get "[init -> test-libc] Starting ldso ..." as the first output.
it looks like the early exception-handling check in our startup code fails for you. Please see the comment at line 230 in '_main.cc':
https://github.com/genodelabs/genode/blob/master/base/src/platform/_main.cc
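In essence, that check throws and immediately catches a test exception very early during startup to make sure the C++ exception mechanism is operational before anything else relies on it — roughly of this shape (a simplified sketch, not the literal code):

  try { throw 1; } catch (...) { }  /* if exception handling is broken, the throw ends up in the C++ runtime's terminate handler instead of the catch */

The "int" in your log is presumably the type of exactly that uncaught test exception.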
It is strange though, that this check does not trigger for normal statically-linked programs but only for 'ld.lib.so'. Maybe there is a subtle difference between the normal linker script and the one used for 'ld.lib.so'?
BTW, are you using the official Genode tool chain?
Norman
On 02/16/2012 06:08 PM, Norman Feske wrote:
Hello again,
it looks like the early exception-handling check in our startup code fails for you. Please see the comment at line 230 in '_main.cc':
https://github.com/genodelabs/genode/blob/master/base/src/platform/_main.cc
It is strange though, that this check does not trigger for normal statically-linked programs but only for 'ld.lib.so'. Maybe there is a subtle difference between the normal linker script and the one used for 'ld.lib.so'?
I have not changed them. The only system-level thing I changed was the core memory address. The available system memory starts at 0x90000000, so core is above that. Could that mean that there is an unresolved memory conflict?
And yes, static programs work.
BTW, are you using the official Genode tool chain?
Yes I am.
Hi,
On 02/16/2012 06:40 PM, Michael Grunditz wrote:
I have not changed them. The only system-level thing I changed was the core memory address. The available system memory starts at 0x90000000, so core is above that. Could that mean that there is an unresolved memory conflict?
And yes, static programs work.
This behavior could be caused by a couple of issues. Can you check whether 'dl_unwind_find_exid' returns something meaningful in 'os/src/lib/ldso/arm/platform.c'? This function is used by the libgcc_eh code in the dynamic case only. Second, can you send me the output of 'objdump -R bin/libc.lib.so' and the same for 'bin/test-libc'? There could be some unsupported relocation types. What EABI version is this platform using?
Greetings,
Sebastian
Hello
This is absolutely correct. I made some experiments and solved part of the problem with l4_cache_clean_data(). I put it into refresh(int x, int y, int w, int h) (os/src/server/nitpicker/genode/main.cc, line 441), and every time I called nitpicker->framebuffer()->refresh(), I got a very good image. (But in my tests I used a hardcoded pointer to the pixel data, because pixels[] inside np_test and local_addr were different.) However, while I dragged an image, I again got noise on the screen; if I stopped (I guess the refresh happened after stopping), I again had a clear image. So, Norman, could you please put cache_clean_data in the right place inside nitpicker to flush the cached pixels on every pixel write?
In our project we use the Chestnut43 board for prototyping. This board has a 4.3” LCD with 480x272 pixels. I took the display configuration for this LCD from the Linux driver. The display works, the picture is drawn, and the colors are correct. But sometimes a window is drawn with distortion (normal drawing: http://dl.dropbox.com/u/8558928/pic_1.jpg , distorted: http://dl.dropbox.com/u/8558928/pic_2.jpg http://dl.dropbox.com/u/8558928/pic_3.jpg ). I can’t solve this yet. Maybe I have a wrong configuration for the display controller.
To me this looks like a typical cache artifact. The syscall bindings of Fiasco.OC provide several functions for dealing with caches on ARM platforms (i.e., see 'cache.h'). Those functions are unused by Genode until now because we haven't experienced such artifacts with the PBXA9 platform or Qemu. Maybe you could investigate if these cache-related functions are relevant to your problem and if so, how they could be put to use in a clean way within Genode?
Hi Vasily,
This is absolutely correct. I made some experiments and solved part of the problem with l4_cache_clean_data(). I put it into refresh(int x, int y, int w, int h) (os/src/server/nitpicker/genode/main.cc, line 441), and every time I called nitpicker->framebuffer()->refresh(), I got a very good image. (But in my tests I used a hardcoded pointer to the pixel data, because pixels[] inside np_test and local_addr were different.) However, while I dragged an image, I again got noise on the screen; if I stopped (I guess the refresh happened after stopping), I again had a clear image. So, Norman, could you please put cache_clean_data in the right place inside nitpicker to flush the cached pixels on every pixel write?
great! I think, however, that nitpicker's refresh function is the wrong place to put the cache-flushing code. This function gets called each time a nitpicker client tells nitpicker about changes in the client-specific pixel buffer. But you want to do the cache flushing even in the event that the client just moves one of its views without changing any buffer content.
I think the cache flushing would be more appropriate in the framebuffer driver's refresh function (i.e., os/src/drivers/framebuffer/omap3fb/main.cc, line 345). This function is called each time nitpicker changes pixels on screen. I suspect that putting the cache-flushing code in this place is even more straightforward because the knowledge about the addresses to flush is right there. Could you give it a try?
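Roughly, I imagine something along these lines in the driver's refresh implementation (only a sketch — '_fb_base', '_scr_width', and the assumption of a 16-bit RGB565 framebuffer are placeholders for whatever the omap3fb driver actually uses):

  namespace Fiasco {
  #include <l4/sys/cache.h>
  }

  /* write back the CPU data cache for the dirty part of the framebuffer */
  void refresh(int x, int y, int w, int h)
  {
      unsigned long const bpp = 2;  /* assuming 2 bytes per RGB565 pixel */
      for (int line = y; line < y + h; line++) {
          unsigned long const start = _fb_base + (line*_scr_width + x)*bpp;
          Fiasco::l4_cache_clean_data(start, start + w*bpp);
      }
  }

That way, each time nitpicker touches pixels, the corresponding cache lines get written back to memory before the display controller fetches them.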
Once you get it working, it would be nice if you push your branch to some public location (GitHub) so that I can have a look. The remaining challenge on my side is to find a way to add the cache-flushing facility in such a way that the driver retains its independence from the kernel interface. Certainly, a new API is needed for that.
Cheers Norman