Erant | Anyone familiar with the minsoc? | 01:07 |
Erant | I'm trying to get it to work with my Atlys board, and the GENERIC_TAP, but it's throwing out errors about tap_top and adbg_top | 01:08 |
Erant | I tried adding tap_top.v to the list of Verilog files in minsoc_top.prj, and I added jtag_top.xst to the list of synthesis files, but still: | 01:08 |
Erant | ERROR:HDLCompiler:1654 - "/home/parallels/minsoc/prj/../rtl/verilog/minsoc_top.v" Line 505: Instantiating <tap_top> from unknown module <tap_top> | 01:08 |
Erant | I'm using XST 14.3, under Linux. | 01:09 |
Erant | It works (or at least synthesizes fine), if I select the internal scan variant. | 01:09 |
Erant | It looks like the tap_top.v needs to be one of the blackboxes. | 02:12 |
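A minimal sketch of the fix Erant describes: the `unknown module <tap_top>` error goes away once the TAP source is actually listed in the XST project file. The path below is an assumption about the minsoc/adv_debug_sys tree layout, not a verified location.

```shell
# Hypothetical fix for the "unknown module <tap_top>" error: append the
# missing TAP source to the XST project file so synthesis can resolve the
# tap_top instance. The source path is an assumption; adjust it to where
# tap_top.v lives in your checkout.
PRJ=minsoc_top.prj
echo 'verilog work "../rtl/verilog/adv_debug_sys/Hardware/jtag/tap/verilog/tap_top.v"' >> "$PRJ"
grep -c tap_top "$PRJ"   # sanity check: the entry is now present
```

After re-running `xst`, the blackbox warning should be gone; if the module is still unresolved, check that the `.xst` script points at the same project file.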
waz | Hi. | 02:29 |
waz | Is there any plan for a 64-bit implementation? | 02:30 |
waz | On the other side, which fmax have you reached in an FPGA? | 02:32 |
juliusb | other side hey? | 02:36 |
juliusb | hehe | 02:36 |
juliusb | i didn't realise the other side of the 64-bit implementation question was maximum frequency, an interesting take | 02:36 |
juliusb | anyway, no 64-bit implementation | 02:36 |
juliusb | you'd need the compiler to support it, as well as the SOC you're running on | 02:36 |
juliusb | no one's done it, no real need | 02:37 |
juliusb | maximum frequency, that's another question | 02:39 |
juliusb | it depends on the RTL implementation, the FPGA, the constraints | 02:39 |
juliusb | fastest on cheap commodity FPGAs like spartan 6 is about 80-100MHz | 02:39 |
juliusb | Erant: sorry, not sure about that myself | 02:40 |
juliusb | perhaps a module is missing, the actual TAP | 02:40 |
Erant | juliusb: It looks like the tap was missing, but adding it wasn't as obvious to me as it should've been. | 02:41 |
Erant | It's synthesizing now. | 02:41 |
Erant | We'll see if it works | 02:41 |
waz | Sorry, I should have said "on the other hand" or moreover. | 02:51 |
Erant | We'll see if it works, if the damned physical synthesis didn't hang. | 02:52 |
waz | I would want to know the possible fmax on both low-end and high-end Altera devices. I'm doing a comparison between OpenRISC and Nios II. | 02:54 |
Erant | waz: fmax doesn't tell you much. You're probably much more interested in the MIPS, for example. | 02:56 |
Erant | Or FLOPS | 02:56 |
waz | I know that fmax is not enough for comparison purposes, but it tells me something about the optimization of the pipelines. I prefer CoreMark. | 02:56 |
Erant | Anyway, I just synthed minsoc (a minimal SoC based on the OpenRISC), and the synthesizer says ~125MHz on a Spartan6. | 02:57 |
waz | Pretty good, I think. For example, if I remember correctly, the Leon-3 can't pass 100 MHz in low-end devices. | 03:05 |
Erant | I wouldn't call the Spartan6 truly low-end (Though I should try and run this for my Virtex-4 and Stratix-II) | 03:07 |
waz | Cyclones and Spartans are low-end, according to their manufacturers. | 03:10 |
Erant | Mjeh, fine. | 03:12 |
waz | Why is there no real need for 64-bit computing? | 03:17 |
juliusb | there's no real need for a 64-bit OpenRISC | 03:17 |
juliusb | I don't see the application | 03:18 |
juliusb | it just makes it bigger, probably slower | 03:18 |
juliusb | my focus is on embedded computing, anyway | 03:18 |
juliusb | i don't see any use | 03:18 |
waz | No "current need" perhaps, but we'd better prepare for the challenges of tomorrow. | 03:19 |
waz | Larger address spaces come to my mind. | 03:20 |
waz | As an example, Altera has expanded support in its system generator (Qsys) for address buses wider than 32 bits. | 03:21 |
waz | RAM is pretty cheap these days. | 03:21 |
waz | Also, how can we know how much slower it will be, if it's never been implemented? | 03:24 |
waz | (OK, fmax goes down, but it's still worth trying) | 03:25 |
Erant | waz: More memory can always be solved with a clever MMU. | 04:06 |
waz | Simple is almost always better. Look at PAE in x86. Barely used. Programmers prefer a linear address space. | 04:34 |
waz | More memory could possibly suppress the need for demand-paging in many systems. | 04:40 |
waz | (I'm starting to ramble) | 04:41 |
stekern | waz: if you want coremark scores, this is mor1kx running @80 MHz on an atlys board (spartan6) | 05:38 |
stekern | http://pastebin.com/Lxpd9Nd1 | 05:38 |
stekern | I get the same result running 80 MHz on a de0 nano board (cyclone IV) | 05:38 |
stekern | I don't agree with juliusb that there aren't applications for 64-bit, it's just that it hasn't scratched anyone's itch enough to implement it | 05:42 |
stekern | I mean, if there aren't applications, why are there 64-bit versions of MIPS? | 05:42 |
waz | stekern: Thanks for the info. Those 80MHz were the maximum achievable? | 05:43 |
stekern | yes, at least in orpsoc | 05:44 |
stekern | that's a lot better than or1200 though, that doesn't go over 50MHz | 05:44 |
stekern | (don't know how Erant got it to go to 125MHz on minsoc, that doesn't sound feasible) | 05:45 |
stekern | waz: I'm hoping to further improve both the fmax and MIPS on the mor1kx cappuccino pipeline though | 05:51 |
stekern | would be interesting to see a comparison with Nios II in an otherwise identical soc | 05:54 |
stekern | My guess is that it's faster (both fmax and mips wise), but that's a subject to (hopefully) change ;) | 05:57 |
stekern | they have the advantage of being a (vendor-specific) FPGA-only implementation | 05:58 |
stekern | that allows them to take a couple of shortcuts that we try to avoid | 05:59 |
Erant | stekern: Eh, it's what the synthesizer put out. We'll see after PAR | 06:10 |
Erant | Which, for some reason, is taking a _really_ long time... | 06:12 |
Erant | Like, I kicked it off an hour ago. | 06:12 |
Erant | And it's in Phase 5 of the PAR | 06:13 |
Erant | Total REAL time to MAP completion: 57 mins 48 secs | 06:14 |
Erant | That's on a Linux VM with 2 cores assigned to it, Intel i7 at 3.4GHz, and 4GB RAM. | 06:14 |
Erant | And it's not even done PARing | 06:15 |
waz | stekern: I think we will need to compare the complete systems under similar loads. By "load" I mean without inadvertently congesting the generated netlist and provoking a lower fmax (after adding peripherals, etc.). | 06:15 |
waz | May I ask what subsets of the specification are implemented in minsoc? | 06:16 |
Erant | waz: It's the same core. The SoC surrounding it is smaller. | 06:17 |
Erant | or1200, that is | 06:18 |
stekern | mor1kx cappuccino is pretty similar to or1200 in what is implemented, the biggest thing missing atm is MMUs | 06:20 |
stekern | (i.e. or1200 has them, mor1kx doesn't) | 06:20 |
stekern | or1200 has FPU too, where mor1kx doesn't | 06:21 |
stekern | http://opencores.org/or1k/OR1K_CPU_Cores | 06:22 |
stekern | there's some descriptions on the different implementations | 06:23 |
waz | It only contains ORBIS32 then. With MMU and FPU the fmax should be lower, so a full or1200 is slower. | 06:26 |
waz | At least, I guess that. | 06:27 |
stekern | not necessarily | 06:31 |
stekern | the fpu can be pretty decoupled from the critical paths (at least I imagine it could) | 06:31 |
waz | In an ideal world. However, even manufacturers are afraid of adding FPUs because of critical paths (Microblaze and Nios II are the proof). | 06:34 |
stekern | is that really the reason? | 06:35 |
stekern | I mean, if that's the case, wouldn't that be a trade-off choice for the user then | 06:36 |
waz | Oh no, it's not the reason. My point is that with an FPU, the chances are high that fmax will drop. Even the highly optimized, device-specific implementations from the FPGA vendors don't do it (although marketing reasons definitely count for them, at least for clients that get excited by higher frequencies). | 06:46 |
Erant | Total REAL time to PAR completion: 45 mins 48 secs <-- There has to be something wrong here :/ | 07:03 |
Erant | MAP + PAR ended up being an hour and 40 minutes. | 07:04 |
stekern | Erant: it usually takes ~25 min to fully synth+par orpsoc for atlys at my ws | 07:36 |
stekern | actually or1200 has a bit higher fmax than I thought on cyclone iv when MMUs are disabled, I get around 72 MHz using the same setup as for mor1kx cappuccino (where I got 80 MHz) | 07:43 |
stekern | waz: and probably the demand for it doesn't live up to the cost of implementing one | 07:47 |
stekern | with mmus enabled or1200 get around 62 MHz on the cyclone iv | 07:51 |
waz | stekern: with both MMU and FPU enabled? | 07:52 |
stekern | no, only MMUs | 07:52 |
stekern | FPU has been disabled on all runs | 07:53 |
stekern | I'm running a MMUs disabled FPU enabled test run now | 07:53 |
waz | Only ORBIS32? | 07:53 |
waz | If that's the case, there is a bottleneck there. | 07:54 |
stekern | in the MMU? | 07:54 |
waz | No, in the pipeline. And also I forgot if both I and D caches are included. | 07:54 |
stekern | both are included (both on or1200 and mor1kx cappuccino) | 07:55 |
waz | How large is the fmax cost of adding precise interrupts (or the whole interrupt subsystem)? If it can be disabled. | 07:57 |
waz | for testing. | 07:57 |
stekern | haven't tested | 07:58 |
waz | It could be an interesting test. | 07:58 |
stekern | you still have exceptions that you really can't disable though | 07:58 |
stekern | well you could, but... | 07:58 |
waz | Sure, you'll need to do heavy modifications. | 07:59 |
waz | How many stages does it have? | 07:59 |
waz | pipeline stages | 07:59 |
stekern | I reckon there's some critical paths there, but not much you can do about them if you want it to be usable still ;) | 07:59 |
stekern | or1200 has 4, mor1kx cappuccino has 6 | 07:59 |
stekern | ... so there's a lot more room to move things around in cappuccino to resolve the bottlenecks | 08:00 |
waz | 4 is too low. | 08:00 |
waz | Definitely a very critical path has formed. | 08:01 |
stekern | I agree, but it depends on your application | 08:01 |
stekern | shorter pipeline => you can get away with smaller implementation | 08:01 |
waz | Do you refer to trading area vs timing? | 08:01 |
stekern | ... something or1200 fails to do though ;) | 08:01 |
waz | OK. I got it. Still I think that some people may want maximum performance. | 08:02 |
stekern | I agree, that's why I'm working on that on the mor1kx cappuccino | 08:02 |
waz | Even Nios II, which has almost the same features, has 5 stages in its minimum pipelined version. | 08:02 |
stekern | that's what's great about the mor1kx, it's pretty easy to modify the pipelines in it | 08:03 |
stekern | juliusb (the mor1kx founder) has a 3-stage pipeline version of it as well | 08:04 |
waz | May I ask something about a specific point? I'm thinking about how costly it is to implement loads with a granularity lower than 32 bits. | 08:04 |
waz | I mean, byte and half-word loads. | 08:05 |
waz | You need a mux to select the value you just got from the cache (assume a hit). | 08:05 |
stekern | actually, that's a pretty interesting thought | 08:06 |
stekern | it is pretty critical path, I agree | 08:06 |
stekern | you could make byte and half-word accesses slower (i.e. registering those results) and possibly gain something there | 08:07 |
stekern | something that should be pretty easy to test too | 08:07 |
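The sub-word load being discussed is essentially a shift-and-mask mux on the 32-bit word returned by the cache. A purely illustrative host-side sketch of the lane-select logic (this is not the or1200/mor1kx RTL; big-endian lane numbering is assumed, as on OpenRISC):

```shell
# Illustrative only: an LSU byte/half-word load picks a lane out of the
# 32-bit cache word, then zero/sign-extends it. Modelled here with shell
# arithmetic; big-endian lane order assumed.
word=0x12345678
byte_lane() {  # $1 = low address bits (0..3), big-endian byte select
  printf '0x%02x\n' $(( (word >> ((3 - $1) * 8)) & 0xff ))
}
half_lane() {  # $1 = half-word index (0..1), big-endian
  printf '0x%04x\n' $(( (word >> ((1 - $1) * 16)) & 0xffff ))
}
byte_lane 0   # 0x12
half_lane 1   # 0x5678
```

The RTL cost comes from this mux (plus the sign/zero extension) sitting on the load result path; registering sub-word results, as stekern suggests, trades an extra cycle for a shorter critical path.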
stekern | ok, FPU enabled and MMU disabled results in an fmax of 73 MHz (so 1 MHz faster than without) | 08:09 |
waz | I found out some days ago that the Alpha architecture only allows 32-bit and 64-bit loads (despite being a 64-bit architecture). The DEC people realized that smaller loads would be too costly for their superscalar implementation. Here we're talking about scalar ones of course, but still, as you say, it's quite costly. | 08:10 |
waz | I was almost shocked by this clever choice. | 08:11 |
waz | stekern, the results are interesting. Evidence that synthesis realms are sometimes mysterious. | 08:14 |
stekern | yeah, I wouldn't read into that 1 MHz increase too much ;) | 08:15 |
waz | How much larger is the area? | 08:15 |
stekern | I didn't look | 08:16 |
stekern | but IIRC, the FPU is pretty large | 08:16 |
waz | Ah OK. | 08:16 |
stekern | could possibly be implemented smaller/better too | 08:16 |
stekern | I think juliusb just took an existing FPU core and hooked it up to or1200 | 08:17 |
stekern | he can correct me if I'm wrong ;) | 08:17 |
waz | It would be interesting to see the STA report. | 08:19 |
stekern | mmus enabled and fpu enabled => 65 MHz | 08:20 |
waz | You should publish some runs online, given how long it takes. | 08:20 |
waz | That's pretty much expected. | 08:21 |
stekern | http://oompa.chokladfabriken.org/tmp/orpsoc.sta.rpt | 08:23 |
stekern | that's from the last run, with mmus and fpu enabled | 08:23 |
waz | Thank you. Question: why don't you enable parallel compilation? | 08:27 |
stekern | here's the map report: http://oompa.chokladfabriken.org/tmp/orpsoc.map.rpt | 08:28 |
stekern | |or1200_fpu:or1200_fpu | 4071 (44) ; 1263 (37) | 08:28 |
stekern | about as large as the rest of the cpu ;) | 08:29 |
stekern | parallel compilation, that's not available in the web version, no? | 08:29 |
waz | In the last version (12.1) if you enable TalkBack it's free. | 08:34 |
waz | (and also the 64-bit version is included) | 08:34 |
stekern | ah, ok | 08:35 |
stekern | perhaps time to upgrade ;) | 08:35 |
stekern | does the parallel compilation make a huge difference though? | 08:37 |
stekern | in ISE it's hardly noticeable | 08:37 |
waz | I need to test it more, but I read in forums that it's about 20-35% | 08:37 |
waz | Not much really in old versions. | 08:38 |
waz | But perhaps it has been enhanced. | 08:38 |
stekern | ok, that's quite a lot | 08:38 |
stekern | compilation times aren't unbearable as they are now anyway | 08:39 |
waz | At least the environment feels snappier (with fancy graphics included). | 08:39 |
stekern | oh, I don't use the gui much... ;) | 08:39 |
stekern | nothing is as snappy as a b&w console you know | 08:40 |
waz | For batch building, nothing better. | 08:40 |
waz | But I must say that Quartus is a quite simple IDE. Feels snappy in part because of that. | 08:42 |
stekern | yeah, I agree, it is good | 08:42 |
stekern | and they have fixed a very annoying "feature" they had in the older versions | 08:43 |
stekern | if you would press ctrl-x in the editor, it would cut the current line, even if nothing was selected | 08:43 |
waz | I've never seen that. | 08:44 |
stekern | you could imagine what happens when an emacs user is doing 'ctrl-x ctrl-s' by old habit | 08:44 |
stekern | well, it bit me a couple of times (I believe it was in version 9 something) | 08:45 |
waz | Pretty bad behavior. | 08:45 |
waz | But it doesn't compare with what happened in some previous version. The text editor went crazy. | 08:46 |
waz | The lines blended in front of you. | 08:47 |
waz | If you didn't scroll to refresh the drawn content and you saved the file, the lines blended, well, disappeared. | 08:49 |
stekern | heh, that's nice... | 08:49 |
stekern | I haven't figured out how to get a textual representation of the Worst-Case paths though | 08:50 |
waz | The timing closure recommendations? | 08:50 |
stekern | I mean the path reporting in the TimeQuest UI | 08:51 |
waz | Try exporting it as HTML | 08:52 |
stekern | http://oompa.chokladfabriken.org/tmp/timequest-ui.png | 08:54 |
stekern | wouldn't I need to do that in the ui? | 08:55 |
stekern | I'd like to avoid starting the ui all together | 08:55 |
waz | I think the "Report Timing Closure Recommendations" is more useful. | 08:56 |
waz | Could you please export the "Long Combinational Path" report? | 08:57 |
waz | I didn't understand what you asked about needing to do it in the UI. | 08:58 |
stekern | I meant, to be able to export to HTML, don't I have to start the UI to do that? | 08:59 |
waz | No, just right click on the name of the generated report. | 09:00 |
waz | Export and select HTML file type. | 09:00 |
stekern | right clicking in the commandline isn't doing anything for me, am I doing it wrong? ;) | 09:02 |
stekern | I was probably not clear enough, I want to generate that report from the commandline | 09:03 |
waz | Oh, I can't help you much. I barely use the command line (for Nios). | 09:05 |
stekern | where's that "Long Combinational Path" report? | 09:06 |
stekern | I haven't seen that | 09:06 |
waz | In the Timequest UI, it is generated under the "Report Timing Closure Recommendations". | 09:08 |
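For what stekern is after (the report without starting the UI), `quartus_sta` can run a Tcl script headlessly. A hedged sketch: the command names come from the TimeQuest Tcl API but should be verified against your Quartus version, and the project name `orpsoc` is taken from the discussion above.

```shell
# Hedged sketch: generate a worst-paths timing report from the command line
# via quartus_sta and a Tcl script. report_timing's options should be
# checked against your Quartus version; "orpsoc" is the project name used
# in this conversation.
cat > sta_paths.tcl <<'EOF'
project_open orpsoc
create_timing_netlist
read_sdc
update_timing_netlist
report_timing -setup -npaths 10 -detail full_path -file worst_paths.txt
EOF
if command -v quartus_sta >/dev/null 2>&1; then
  quartus_sta -t sta_paths.tcl
else
  echo "quartus_sta not in PATH; run this where Quartus is installed"
fi
```

`worst_paths.txt` then holds a textual version of the same paths the TimeQuest UI shows.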
waz | Thanks for the information, stekern. | 09:16 |
waz | I have to go. I hope to collaborate with something for the project in the near future. | 09:18 |
waz | Thanks again and bye. | 09:18 |
stekern | just when I was about to say thanks for the guidance about Timing Closure Recommendations | 09:23 |
stekern | oh, and tweaking the compilation settings upped the fmax to 89 MHz on mor1kx | 09:45 |
mor1kx | [mor1kx] skristiansson pushed 3 new commits to master: https://github.com/openrisc/mor1kx/compare/2021b2d04e22...b0f0adc4f704 | 13:08 |
mor1kx | mor1kx/master 2fb5c78 Stefan Kristiansson: Remove remains from when icache was located outside cpu | 13:08 |
mor1kx | mor1kx/master 7dae85b Stefan Kristiansson: cappuccino/ctrl_branch: connect pipeline_flush to imm_branch reset | 13:08 |
mor1kx | mor1kx/master b0f0adc Stefan Kristiansson: move dcache to lsu | 13:08 |
stekern | juliusb: I changed the mor1kx github description from 'mor1kx' to 'mor1kx - an OpenRISC 1000 processor IP core' | 13:16 |
stekern | because I realised that it's hard to get what it actually is when browsing pages like this: https://github.com/languages/Verilog/updated | 13:17 |
LoneTech | why is gc-sections architecture specific? | 14:05 |
stekern | LoneTech: in BFD? | 14:14 |
LoneTech | I assume so | 14:18 |
stekern | the only really architecture specific thing I have seen going on there is updating the got reference counting for the sections being removed | 14:19 |
stekern | why the or32 didn't support it, I don't know | 14:21 |
LoneTech | didn't? | 14:21 |
stekern | well, it still doesn't, but the or1k toolchain does | 14:22 |
stekern | and the openrisc target in bfd did as well | 14:22 |
LoneTech | I am clearly a bit out of date on how many permutations of toolchains there are | 14:23 |
stekern | let me bring you up to date :) | 14:23 |
stekern | in ~2000 Johan Rydberg submitted a cgen generated openrisc target to "sourceware" | 14:24 |
stekern | in ~2001 Ivan Guzvinec submitted a or32 target to "sourceware" | 14:25 |
stekern | in ~2011 Julius Baxter merged the two into the or1k target | 14:26 |
stekern | and world order has (almost) been restored ;) | 14:26 |
LoneTech | thank you :) | 14:31 |
blueCmd | are there any patch-series for gcc 4.7 and binutils 2.23 ? | 16:18 |
blueCmd | also, has anybody made any real effort in porting glibc? | 16:18 |
stekern | blueCmd: no, but for gcc 4.8 and 2.22.52 | 16:23 |
stekern | was a while since it was synced against upstream | 16:23 |
stekern | to your second question, no | 16:23 |
blueCmd | stekern: ah, where can I find these patches? | 16:25 |
blueCmd | synced against upstream isn't the same as that they were accepted by upstream, right? | 16:26 |
stekern | https://github.com/openrisc/or1k-src | 16:26 |
stekern | https://github.com/openrisc/or1k-gcc | 16:27 |
blueCmd | oh, sweet. is this linked on the opencores-site? | 16:27 |
stekern | there are some dynamic linking patches that I haven't pushed there (I should clean them up) here too: | 16:27 |
stekern | https://github.com/skristiansson/or1k-gcc | 16:27 |
jeremybennett | blueCmd: There is a lot about the tool chains on the Wiki, including test results. | 16:28 |
blueCmd | jeremybennett: indeed, as somewhat of a newcomer it's quite hard to know what information is current | 16:28 |
stekern | I probably should just merge my changes and do a changelog right away | 16:29 |
stekern | and cleanup the few warts later | 16:29 |
blueCmd | jeremybennett: I found it under "Installation of development versions" | 16:30 |
blueCmd | so I guess it's there :) | 16:30 |
blueCmd | stekern: is there some kind of plan for merging these patches with upstream? | 16:44 |
stekern | the biggest problem with that is getting permission from all the people that have hacked on it over the years to assign the copyright to the FSF | 16:59 |
stekern | or at least that's my understanding of it | 16:59 |
blueCmd | Ah, I see. it would probably be really beneficial to openrisc though. | 17:08 |
blueCmd | stekern: may I bother you for the configure flags you use when bootstrapping or1k-gcc? | 17:20 |
blueCmd | when it tries to build mno-delay/libgcc_s.so it wants to link against libc, which I don't have yet | 17:21 |
jeremybennett | stekern: You are right about the legal problems. | 17:22 |
jeremybennett | It is why I have always held off doing it. | 17:22 |
jeremybennett | We'd have to get assignment from all the people who had been involved. | 17:22 |
blueCmd | but that's a problem that won't go away, isn't it? | 17:23 |
jeremybennett | blueCmd: It's very useful to have feedback on the Wiki from new users. Please edit it to make it clearer in the light of your experience. | 17:23 |
jeremybennett | blueCmd: You are right, but to solve the problem requires a great deal of very tedious effort, and it is hard to justify spending that effort. | 17:24 |
blueCmd | jeremybennett: Yeah, I understand | 17:25 |
blueCmd | jeremybennett: I might just do that, there is often a lot of redundant and outdated information. I'm going to write an article (mainly for my own use) when I get everything running as I want it anyway so it will fit nicely | 17:27 |
stekern | blueCmd: http://pastie.org/5031284 | 17:28 |
jeremybennett | blueCmd: Thanks | 17:28 |
blueCmd | stekern: oh man, did I miss that somewhere? :( | 17:29 |
stekern | no, it's my own cheat-sheet | 17:29 |
blueCmd | ah, phew | 17:29 |
stekern | it has been pointed out that two make install steps are missing in that | 17:29 |
blueCmd | that seems correct yes, not that I would have read it that closely to have noticed it though :P | 17:31 |
stekern | http://pastie.org/5547801 | 17:32 |
stekern | that should have those | 17:33 |
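The chicken-and-egg blueCmd hit (libgcc_s.so wanting a libc that doesn't exist yet) is usually dodged with a staged build. Below is a hedged sketch of the common shape of such a bootstrap; it is NOT the contents of stekern's pastie (which aren't reproduced here), and the target triplet, paths, and flag set are assumptions.

```shell
# Hedged sketch (not stekern's cheat-sheet): write out a staged bootstrap
# script. Stage-1 gcc is configured --without-headers and --disable-shared
# so libgcc can be built before any libc exists, avoiding the libgcc_s/libc
# chicken-and-egg. Triplet, prefix and flags are assumptions.
cat > bootstrap-or1k.sh <<'EOF'
#!/bin/sh
set -e
TARGET=or1k-linux-uclibc
PREFIX=${PREFIX:-$HOME/opt/or1k}
mkdir -p build-binutils build-gcc
(cd build-binutils && ../or1k-src/configure --target=$TARGET \
    --prefix="$PREFIX" --disable-nls && make && make install)
(cd build-gcc && ../or1k-gcc/configure --target=$TARGET --prefix="$PREFIX" \
    --enable-languages=c --without-headers --disable-shared \
    --disable-threads && make all-gcc all-target-libgcc \
    && make install-gcc install-target-libgcc)
EOF
chmod +x bootstrap-or1k.sh
echo "wrote bootstrap-or1k.sh; run it next to or1k-src/ and or1k-gcc/ checkouts"
```

Once a libc is installed into the sysroot, gcc can be reconfigured and rebuilt with shared-library support.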
blueCmd | stekern: jeremybennett: do you guys work with openrisc or what are your stories? | 17:49 |
jeremybennett | blueCmd: I run an open source software company specializing in compiler tool chains for embedded systems | 17:55 |
stekern | blueCmd: I'm in it for the kicks ;) | 17:55 |
blueCmd | stekern: jeremybennett cool :) | 17:56 |
stekern | actually, I was supposed to use openrisc in a hobby project around two years ago, I kind of forgot about that hobby project and kept hacking on various openrisc projects instead... | 17:59 |
blueCmd | heh, that might as well be me in 2 years then :P | 18:01 |
stekern | yeah, watch out, it is an addictive drug ;) | 18:01 |
blueCmd | cool, gcc 4.8.0 just compiled linux 3.6.10 and it works. great! | 18:01 |
blueCmd | haha, this is great - my init-process runs! | 18:05 |
blueCmd | stekern: jeremybennett thanks for your help :) | 18:05 |
jeremybennett | blueCmd: You're welcome. | 18:23 |
poke53281 | Stekern: I have problems running shared library programs with this toolchain | 20:05 |
poke53281 | static: no problem | 20:05 |
poke53281 | but when I run my shared library hello world program I get the error message | 20:06 |
poke53281 | "/bin/sh: ./a.out: not found" | 20:06 |
poke53281 | ldd with debug messages gives me the output | 20:06 |
poke53281 | # ./ldd a.out | 20:06 |
poke53281 | ldd: can't open cache '/etc/ld.so.cache' | 20:06 |
poke53281 | checking sub-depends for '/usr/lib/libc.so.0' | 20:06 |
poke53281 | argc=1 argv=0x7feaded4 envp=0x7feadedc | 20:06 |
poke53281 | ELF header=0x30000000 | 20:06 |
poke53281 | First Dynamic section entry=0x30009c8c | 20:06 |
poke53281 | Scanning DYNAMIC section | 20:06 |
poke53281 | Done scanning DYNAMIC section | 20:06 |
poke53281 | About to do library loader relocations | 20:06 |
poke53281 | Done relocating ldso; we can now use globals and make function calls! | 20:06 |
poke53281 | _dl_get_ready_to_run:446: Cool, ldso survived making function calls | 20:06 |
poke53281 | _dl_get_ready_to_run:625: Position Independent Executable: app_tpnt->loadaddr=0x8000000 | 20:06 |
poke53281 | _dl_malloc:236: mmapping more memory | 20:06 |
poke53281 | So, in the end the program finds all the libraries. But still the same problem. | 20:07 |
poke53281 | Sorry for spamming the chat :) | 20:07 |
poke53281 | Normally I would track down the problem with strace. But unfortunately it is not supported. | 20:14 |
Erant | Right now it's kinda hacky, but I got minsoc to run on an atlys board (*** Self-test PASSED ***, yay), who do I talk to about getting that port into mainline? | 20:35 |
poke53281 | @stekern: Found the problem. The compiled program tries to find the program interpreter in /usr/lib/ld.so.1, but uClibc doesn't use this file. A symlink to /lib/ld-uClibc.so.0 solves the problem. | 21:00 |
poke53281 | So, either there is an error in gcc or uClibc. | 21:01 |
poke53281 | "readelf -l program_name" shows the link to the program interpreter. You can try it in your toolchain | 21:02 |
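The workaround poke53281 describes, sketched as commands. The paths are the ones named in the discussion; on a development machine a scratch directory (`ROOT`) stands in for the target's root filesystem.

```shell
# The fix described above: the binary's PT_INTERP requests /usr/lib/ld.so.1,
# while uClibc installs /lib/ld-uClibc.so.0, so a symlink bridges the two.
# ROOT lets this be rehearsed in a scratch dir instead of the real rootfs.
ROOT=${ROOT:-$(mktemp -d)}
mkdir -p "$ROOT/usr/lib"
ln -sf /lib/ld-uClibc.so.0 "$ROOT/usr/lib/ld.so.1"
readlink "$ROOT/usr/lib/ld.so.1"   # -> /lib/ld-uClibc.so.0
# To see which interpreter a given binary requests, on the built ELF:
# readelf -l ./a.out | grep interpreter
```

The proper fix (per stekern) is changing the hardcoded interpreter path in BFD; the symlink just unblocks testing in the meantime.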
-!- X-Scale is now known as Guest116 | 21:22 | |
-!- X-Scale` is now known as X-Scale | 21:23 | |
stekern | poke53281: it's actually in bfd, it's hardcoded to /usr/lib/ld.so.1 | 21:30 |
stekern | I was supposed to change that, but obviously I forgot about it | 21:31 |
stekern | ah, I see that peter gavin has merged my changes and synced with mainline | 21:41 |
stekern | maybe we should just push that to github/openrisc | 21:41 |
poke53281 | Ok, I will clone and try again tomorrow. | 21:44 |
poke53281 | Next Problem :) | 21:44 |
poke53281 | can't resolve symbol '__udiv13' in lib '/usr/lib/libc.so.0' | 21:44 |
poke53281 | This is a simple hello world program. Nothing fancy. | 21:45 |
poke53281 | Had no time so far to figure out the problem. Maybe you have a short answer. | 21:47 |
poke53281 | can't resolve symbol '__udivsi3' in lib '/usr/lib/libc.so.0'. | 21:47 |
poke53281 | This is the correct error message | 21:48 |
stekern | yeah, I thought the udiv13 was weird | 21:48 |
stekern | __udivsi3, that should be in libgcc | 21:48 |
poke53281 | http://pastie.org/5548971 | 21:55 |
poke53281 | The whole output | 21:55 |
stekern | hmm, can't say I can see why that is happening off the top of my head | 22:27 |
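One hedged way to narrow this down is checking which library, if any, dynamically exports the soft-division helper. The `libc.so.0` path comes from the error message; on a machine without the or1k sysroot the fallback branch simply reports that.

```shell
# Check whether the shared libc actually exports __udivsi3. Normally the
# helper lives in libgcc and is linked statically, so an unresolved dynamic
# reference to it from a hello-world suggests a link-time problem.
LIB=${LIB:-/usr/lib/libc.so.0}   # path taken from the error message above
if nm -D "$LIB" 2>/dev/null | grep -q udivsi3; then
  STATUS="udivsi3 exported by $LIB"
else
  STATUS="no udivsi3 export in $LIB"
fi
echo "$STATUS"
```

If the symbol is missing everywhere, relinking the program or libc with an explicit `-lgcc` is a plausible next experiment.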
blueCmd | Erant: is it http://www.digilentinc.com/Products/Detail.cfm?NavPath=2,400,836&Prod=ATLYS&CFID=131907&CFTOKEN=13787544 ? | 23:34 |
Erant | Yah | 23:35 |
Erant | poke53281: Can you nm that libc? See what symbols it exports? | 23:36 |
blueCmd | Erant: that's just weird, my plan was to get some SoC running on that board in ~february or something - I would love to take a look at your patches | 23:39 |
blueCmd | not that I can help you or anything, just curious | 23:39 |
Erant | blueCmd: They're quite simple. And I'm not using the internal JTAG scan chain | 23:41 |
Erant | I'm using the adv_sys_debug TAP brought out to the Pmod connector | 23:41 |
Erant | (And then an FT2232 dongle) | 23:41 |
blueCmd | I haven't looked at the board yet, I will borrow it from a friend around february, so I don't know anything about the internals of it yet | 23:42 |
Erant | Right now I'm trying to get physical synthesis to be faster. There's something wrong here... | 23:42 |
poke53281 | @stekern: http://pastebin.com/iGsvX4tZ | 23:59 |
Generated by irclog2html.py 2.15.2 by Marius Gedminas - find it at mg.pov.lt!