MMU issues with AM335X and 14.05
Martin Stein
martin.stein at ...1...
Tue Aug 19 14:42:57 CEST 2014
Hi Bob,
For a first idea of mine, could you please execute the following command
on your Genode branch:
git log | grep "get a thread cap in Thread_base constructor"
Recently, there was a bug in the thread back end of base-hw that
triggered errors like yours. The above command should print the
headlines of two corresponding commits (we had to fix it twice because
of communication problems).
Furthermore, the method 'Kernel::Thread::_print_activity_table()' in
'base-hw/src/core/kernel/thread.cc' may help you. It prints all threads
that are registered at the kernel. However, you should call it only when
you're inside the kernel. I suggest calling it at the beginning or the
end (not in between, as this might corrupt thread states) of an
appropriate syscall back end ('Kernel::Thread::_call_*' in
'base-hw/src/core/kernel/thread.cc'). By instrumenting the syscall back
ends, you can also easily trace when each thread gets created.
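A minimal sketch of such an instrumentation, assuming a back end named
'_call_start_thread' (a hypothetical example, check 'thread.cc' for the
actual back-end names in your version):

  void Kernel::Thread::_call_start_thread()
  {
      /* dump all threads registered at the kernel before the
       * back end modifies any thread state */
      _print_activity_table();

      /* ... original back-end code remains unchanged ... */
  }
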
Cheers,
Martin
On 18.08.2014 22:33, Bob Stewart wrote:
> Martin,
> It appears that an Rpc entrypoint thread is successfully created (with
> id 0x0f) in the Platform_thread constructor, but when the thread is
> started, a call to access its thread registers fails because an object
> with that id cannot be found in the thread object pool.
>
> Bob
>
> On 08/18/2014 11:33 AM, Bob Stewart wrote:
>> Martin,
>> With the pull from early last week (before Norman's announcement
>> of some of the commits for 14.08), the issue with the ROM fs
>> initialization is gone. The memory allocation looks correct, as far
>> as I can tell:
>>
>> Core virtual memory allocator
>> ---------------------
>> Allocator 810d1850 dump:
>> Block: [00001000,00002000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [00002000,00003000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [00003000,00004000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [00004000,00005000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [00005000,00006000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [00006000,00007000) size=00001000 avail=00000000 max_avail=80ff3000
>> Block: [00007000,00008000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [00008000,00009000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [00009000,0000a000) size=00001000 avail=00000000 max_avail=80ff3000
>> Block: [0000a000,0000b000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [0000b000,0000c000) size=00001000 avail=00000000 max_avail=80ff3000
>> Block: [0000c000,0000d000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [0000d000,81000000) size=80ff3000 avail=80ff3000 max_avail=80ff3000
>> Block: [8139d000,ffff0000) size=7ec53000 avail=7ec53000 max_avail=7ec53000
>> => mem_size=4291108864 (4092 MB) / mem_avail=4291059712 (4092 MB)
>>
>> RAM memory allocator
>> ---------------------
>> Allocator 810d07f4 dump:
>> Block: [80000000,80001000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [80001000,80002000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [80002000,80003000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [80003000,80004000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [80004000,80005000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [80005000,80006000) size=00001000 avail=00000000 max_avail=1ec63000
>> Block: [80006000,80007000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [80007000,80008000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [80008000,80009000) size=00001000 avail=00000000 max_avail=1ec63000
>> Block: [80009000,8000a000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [8000a000,8000b000) size=00001000 avail=00000000 max_avail=1ec63000
>> Block: [8000b000,8000c000) size=00001000 avail=00000000 max_avail=00000000
>> Block: [8000c000,81000000) size=00ff4000 avail=00ff4000 max_avail=1ec63000
>> Block: [8139d000,a0000000) size=1ec63000 avail=1ec63000 max_avail=1ec63000
>> => mem_size=533082112 (508 MB) / mem_avail=533032960 (508 MB)
>>
>> IO memory allocator
>> -------------------
>> Allocator 810d28b8 dump:
>> Block: [44c00000,44e05000) size=00205000 avail=00205000 max_avail=00205000
>> Block: [44e06000,44e09000) size=00003000 avail=00003000 max_avail=00205000
>> Block: [44e0a000,45000000) size=001f6000 avail=001f6000 max_avail=001f6000
>> Block: [47400000,47820000) size=00420000 avail=00420000 max_avail=00dff000
>> Block: [48000000,48200000) size=00200000 avail=00200000 max_avail=00dff000
>> Block: [48201000,49000000) size=00dff000 avail=00dff000 max_avail=00dff000
>> => mem_size=25284608 (24 MB) / mem_avail=25284608 (24 MB)
>>
>> IRQ allocator
>> -------------------
>> Allocator 810d3914 dump:
>> Block: [00000000,0000007f) size=0000007f avail=0000007f max_avail=0000007f
>> => mem_size=127 (0 MB) / mem_avail=127 (0 MB)
>>
>> ROM filesystem
>> --------------
>> Rom_fs 810d4954 dump:
>> Rom: [810fb000,8113a9e8) init
>> Rom: [81175000,811b1ad0) bbb_platform_client
>> Rom: [81275000,812a9980) bbb_heart_beat_led
>> Rom: [81361000,8139bb30) Autopilot
>> Rom: [81233000,812740c8) ctl_module_drv
>> Rom: [8139c000,8139ca80) config
>> Rom: [811b2000,811f445c) gpio_drv
>> Rom: [812aa000,812e8174) pwm_drv
>> Rom: [8113b000,81174b6c) platform_drv
>> Rom: [811f5000,81232720) timer
>> Rom: [8132a000,81360944) sd_card_bench
>> Rom: [812e9000,81329164) uart_drv
>>
>> I'm now into some threading issues, which I'll pursue today (I get
>> the messages 'unknown thread', 'failed to initialize thread
>> registers', and 'failed to start thread' after the ROM initialization).
>>
>> Looking at all of the changes in the current master git repository,
>> in the area of code I've just gone through, I think I should just do
>> another pull from master, unless there are more commits coming in
>> this area before the official release of 14.08. I'll find out what's
>> going on with this thread issue anyway.
>>
>> Bob
>>
>>
>> On 08/17/2014 06:56 PM, Bob Stewart wrote:
>>> Oops, looks like I stomped on the closing brace of the 'Arm'
>>> namespace declaration in .../core/processor_driver/arm.h during a
>>> merge. Not sure why the compiler didn't catch that, but the core
>>> library now builds. I've one issue with an application module which
>>> I'll fix tomorrow, then I'll get to testing the kernel startup again.
>>>
>>> Thanks,
>>> Bob
>>>
>>>
>>> On 08/17/2014 04:51 PM, Bob Stewart wrote:
>>>> Yes, Martin, I did a make clean after the pull. Still looking into
>>>> why the Arm namespace shows up above Genode. I'll let you know when
>>>> I find it.
>>>>
>>>> Bob
>>>>
>>>> On 08/17/2014 03:27 PM, Martin Stein wrote:
>>>>> Btw., I hope you have done 'make clean' after your Makefile
>>>>> changes. Otherwise, this could cause the build system to act
>>>>> strangely.
>>>>>
>>>>> On 17.08.2014 21:16, Martin Stein wrote:
>>>>>> Hi Bob,
>>>>>> Looks like there's a problem with the namespaces. Instead of
>>>>>> being enclosed in 'Arm', 'Genode' should be a top-level
>>>>>> namespace, as should 'Kernel'. I think it would be a good idea to
>>>>>> investigate why the 'Arm' prefix is active at all at
>>>>>> 'double_list.h:95'.
>>>>>>
>>>>>> Martin
>>>>>>
>>>>>> On 17.08.2014 18:57, Bob Stewart wrote:
>>>>>>> Hi Martin,
>>>>>>> Been out-of-town for the past few days.
>>>>>>>
>>>>>>> I understand the changes due to issue 1199, and I did create the
>>>>>>> core-library make file (in base-hw/lib/mk/platform_bbb) for the
>>>>>>> platform I'm working with before I left. Everything built OK
>>>>>>> until it got to 'cpu_session_component.cc' in the core-library
>>>>>>> build section, where multiple errors occurred. The following
>>>>>>> error would indicate I'm missing something fundamental:
>>>>>>>
>>>>>>> In file included from /Work/Genode/genode-14.05/repos/base-hw/src/core/kernel/scheduler.h:19:0,
>>>>>>>                  from /Work/Genode/genode-14.05/repos/base-hw/src/core/kernel/processor.h:21,
>>>>>>>                  from /Work/Genode/genode-14.05/repos/base-hw/src/core/kernel/thread.h:21,
>>>>>>>                  from /Work/Genode/genode-14.05/repos/base-hw/src/core/include/platform_thread.h:29,
>>>>>>>                  from /Work/Genode/genode-14.05/repos/base/src/core/include/cpu_session_component.h:27,
>>>>>>>                  from /Work/Genode/genode-14.05/repos/base/src/core/cpu_session_component.cc:21:
>>>>>>> /Work/Genode/genode-14.05/repos/base-hw/src/core/kernel/double_list.h: In member function 'void Arm::Kernel::Double_list<T>::insert_tail(Arm::Kernel::Double_list<T>::Item*)':
>>>>>>> /Work/Genode/genode-14.05/repos/base-hw/src/core/kernel/double_list.h:95:4: error: 'printf' is not a member of 'Arm::Genode'
>>>>>>>
>>>>>>>
>>>>>>> Regarding the page fault in the ROM fs initialization, I did
>>>>>>> not isolate the root cause -- it appeared to be getting a low
>>>>>>> address of 0x1000 and trying to create a translation-table entry
>>>>>>> with that. I thought I saw some changes in the ROM area in the
>>>>>>> pull, so I was going to get a clean build and then try again.
>>>>>>>
>>>>>>> Bob
>>>>>>>
>>>>>>>
>>>>>>> On 08/13/2014 05:12 AM, Martin Stein wrote:
>>>>>>>> Hey Bob,
>>>>>>>>
>>>>>>>> The changes you're talking about originate from this issue:
>>>>>>>> https://github.com/genodelabs/genode/issues/1199. Core now
>>>>>>>> consists of a generic 'base-hw/src/core/target.mk' that solely
>>>>>>>> defines the target name and a dependency on the library 'core'.
>>>>>>>> All the other content that core is composed of resides in the
>>>>>>>> various 'core.mk' and 'core.inc' library-description files
>>>>>>>> within 'base-hw/lib/mk' and its sub-directories (respectively
>>>>>>>> 'core-*.mk' and 'core-*.inc' for libraries that are additions
>>>>>>>> to the core library).
>>>>>>>> On the one hand, these changes reduce redundancy and the LOC
>>>>>>>> count, as hardware specifics were split up in a more
>>>>>>>> fine-grained fashion when transferred into libraries; on the
>>>>>>>> other hand, we unified the scheme of handling orthogonal
>>>>>>>> specifiers (see, for example, the core-trustzone* files that
>>>>>>>> provide optional TrustZone support for different platform
>>>>>>>> specifiers). Apart from that, the commits didn't change that
>>>>>>>> much regarding the substance of core.
>>>>>>>> I hope this short insight helps you apply your changes to the
>>>>>>>> current state. If you have further questions on this, don't
>>>>>>>> hesitate to ask.
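>>>>>>>>
>>>>>>>> As an illustration, the generic target description is
>>>>>>>> essentially just the following (a sketch based on the
>>>>>>>> description above, not a verbatim copy of the file):
>>>>>>>>
>>>>>>>>   # base-hw/src/core/target.mk
>>>>>>>>   TARGET = core
>>>>>>>>   LIBS  += core
>>>>>>>>
>>>>>>>> while a platform contributes its own core-library description,
>>>>>>>> roughly along these lines (the 'platform_bbb' directory and the
>>>>>>>> include path are hypothetical):
>>>>>>>>
>>>>>>>>   # base-hw/lib/mk/platform_bbb/core.mk
>>>>>>>>   INC_DIR += $(REP_DIR)/src/core/include
>>>>>>>>   include $(REP_DIR)/lib/mk/core.inc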
>>>>>>>>
>>>>>>>> Regarding the page fault: Does that mean that you were able to
>>>>>>>> fix the fault?
>>>>>>>>
>>>>>>>> Cheers
>>>>>>>> Martin
>>>>>>>>
>>>>>>>> On 12.08.2014 23:41, Bob Stewart wrote:
>>>>>>>>> Martin,
>>>>>>>>> Have not been able to build with the pull from the master
>>>>>>>>> branch. Looks like there are changes to the base-hw build that
>>>>>>>>> I've not seen before. The platform target.mk file now appears
>>>>>>>>> in a repos/base-hw/lib/mk/ directory as core.mk. Is there
>>>>>>>>> documentation on the changes? I searched the git repository
>>>>>>>>> but couldn't find any.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> The MMU fault address was coming from the creation of
>>>>>>>>> Rom_modules in the ROM fs.
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Bob
>>>>>>>>>
>>>>>>>>> On 08/11/2014 09:03 AM, Bob Stewart wrote:
>>>>>>>>>>
>>>>>>>>>> Thanks for the quick reply, Martin.
>>>>>>>>>>
>>>>>>>>>> I'll pull the current master branch tomorrow and let you know
>>>>>>>>>> if it fixes my issue.
>>>>>>>>>>
>>>>>>>>>> Thanks for the debugging tip on core faults.
>>>>>>>>>> My core-only MMIO regions are the same as they were in 14.02,
>>>>>>>>>> and unless the handling of these regions has changed, I should
>>>>>>>>>> have the correct translation-table entries. My PDBG output
>>>>>>>>>> from the _mmu_exception method was:
>>>>>>>>>>
>>>>>>>>>> void Kernel::Thread::_mmu_exception(): f_addr 0x1008 f_writes 0x1 f_pd 0x813d6004 f_signal 0x0 label core
>>>>>>>>>>
>>>>>>>>>> Looks like I've a problem with the fault address, so I'll
>>>>>>>>>> keep digging to see where that is coming from.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Bob
>>>>>>>>>> On 08/11/2014 07:54 AM, Martin Stein wrote:
>>>>>>>>>>> Hi Bob,
>>>>>>>>>>>
>>>>>>>>>>> On 09.08.2014 22:21, Bob Stewart wrote:
>>>>>>>>>>>> I went back to the 14.05 issues today and found I could get the kernel
>>>>>>>>>>>> initialization to complete successfully if I reverted the S bit to
>>>>>>>>>>>> "unshared" in the memory attributes when creating a section entry. Prior to
>>>>>>>>>>>> 14.05, this bit was set to "unshared" and was presumably changed in 14.05
>>>>>>>>>>>> to allow multiple processors to access the same memory regions.
>>>>>>>>>>> We recently had an issue (https://github.com/genodelabs/genode/issues/1181)
>>>>>>>>>>> showing that the shared bit should be set only when using SMP. The
>>>>>>>>>>> related changes are in the current state of our master branch
>>>>>>>>>>> (https://github.com/genodelabs/genode/tree/master) but not in version
>>>>>>>>>>> 14.05. Could you please give it a try?
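>>>>>>>>>>>
>>>>>>>>>>> As a rough sketch of the idea (all names here are
>>>>>>>>>>> hypothetical, see the issue for the actual patch), the
>>>>>>>>>>> shareable bit of a section descriptor is derived from the
>>>>>>>>>>> processor count instead of being hard-wired:
>>>>>>>>>>>
>>>>>>>>>>>   /* hypothetical sketch, not the actual patch */
>>>>>>>>>>>   bool const smp = NR_OF_PROCESSORS > 1;
>>>>>>>>>>>   descriptor |= Section::S::bits(smp ? 1 : 0);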
>>>>>>>>>>>
>>>>>>>>>>>> In addition, after completing kernel initialization, core's "main"
>>>>>>>>>>>> function is entered, the info message for creating local services shows
>>>>>>>>>>>> up, a translation for the top of RAM (0x80000000) is created, then the
>>>>>>>>>>>> message "failed to communicate thread event" occurs and init is never
>>>>>>>>>>>> called. Any thoughts on why that message is appearing would be
>>>>>>>>>>>> appreciated. It appears to be coming from initialization of the root
>>>>>>>>>>>> interfaces.
>>>>>>>>>>> This seems to be a page fault in core. Normally, core should never
>>>>>>>>>>> trigger a page fault because there's no one to handle it. So the kernel
>>>>>>>>>>> doesn't know whom to inform about it and thus prints this message. To
>>>>>>>>>>> prevent this situation, memory regions statically needed by core
>>>>>>>>>>> (program image, MMIO regions) get mapped 1:1 in the 'Core_pd'
>>>>>>>>>>> constructor in 'base-hw/src/core/kernel/kernel.cc' using, among others,
>>>>>>>>>>> the platform-specific method 'Platform::_core_only_mmio_regions'. I
>>>>>>>>>>> assume that your port is missing a region in this function. You can get
>>>>>>>>>>> further information about the page fault by printing things like
>>>>>>>>>>> '_fault_addr', '_fault_writes', 'char const * Thread::label()', and
>>>>>>>>>>> 'unsigned long Thread::ip' in the method 'Thread::_mmu_exception()' in
>>>>>>>>>>> 'base-hw/src/core/arm/cpu_support.cc', right before '_fault.submit()',
>>>>>>>>>>> as sketched below.
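>>>>>>>>>>>
>>>>>>>>>>> A minimal sketch of such an instrumentation, assuming the
>>>>>>>>>>> 'PDBG' macro from 'base/printf.h' (adjust the casts to the
>>>>>>>>>>> actual member types):
>>>>>>>>>>>
>>>>>>>>>>>   /* in Thread::_mmu_exception(), right before _fault.submit() */
>>>>>>>>>>>   PDBG("f_addr 0x%lx f_writes 0x%lx label %s ip 0x%lx",
>>>>>>>>>>>        (unsigned long)_fault_addr, (unsigned long)_fault_writes,
>>>>>>>>>>>        label(), ip);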
>>>>>>>>>>>
>>>>>>>>>>> Cheers
>>>>>>>>>>> Martin
>>>>>>>>>>>