--- Log opened Thu May 08 00:00:39 2014 | ||
stekern | note to self - we should probably do something like this in gdb: https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=9404b58f46328b3b171b0d5eeb0691bd685bc4f5 | 05:31 |
---|---|---|
stekern | blueCmd: https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=999b995ddc4a8a2f146ebf9a46c9924c6a7c65a6 | 06:18 |
pgavin | I like the new or1k-tests suite | 07:04 |
pgavin | perhaps we should integrate it into the or1ksim tree? | 07:04 |
pgavin | I guess separate is ok... but it would be cool to distribute it | 07:04 |
pgavin | could be kept in sync with git-subtree | 07:05 |
stekern | pgavin: my idea is to pull in the or1ksim testsuite + your m4 based stuff | 07:05 |
pgavin | hah, really? | 07:05 |
pgavin | cool :) | 07:05 |
pgavin | I added a few more tests | 07:05 |
pgavin | but it still doesn't have very good coverage IMO | 07:06 |
stekern | hence the 'native' name of the tests that are "dumped 'natively' into the or1k-tests repo" | 07:06 |
pgavin | ok | 07:06 |
pgavin | I'm ok with deprecating the old tree and just merging | 07:06 |
pgavin | btw | 07:06 |
pgavin | since AFAIK I'm the only one using it | 07:07 |
stekern | ok, that works for me too | 07:07 |
stekern | or1ksim is probably best to keep 'external' though | 07:07 |
pgavin | what I had in mind (something I'm doing now) is making modifications to the testsuite tree, then pulling them into my project using git subtree. it means I can reuse the test framework for implementation specific tests without polluting the main testsuite | 07:10 |
pgavin | but I suppose or1ksim doesn't have anything "implementation specific" | 07:10 |
pgavin | except the l.nop hacks :) | 07:10 |
stekern | yeah, I don't think it does | 07:11 |
stekern | ..but, I think in or1k-tests, it could be fine to dump implementation specific tests in there too | 07:11 |
pgavin | ok | 07:12 |
pgavin | oh. so, I was looking at some of the code gcc produces | 07:12 |
pgavin | and I saw quite a few load-uses that could be pushed after another instruction | 07:12 |
stekern | in the current proof-of-concept, my idea was that you compile implementation-specific test-lists, and then use those to test different implementations | 07:13 |
pgavin | since a load-use immediately after the load often causes a bubble, I was going to look into fixing that | 07:13 |
pgavin | ah, ok | 07:13 |
pgavin | I like the tests list idea | 07:13 |
stekern | (load/bubble) ok, cool. another thing that would be interesting to look at (and that I have been meaning to do some time), would be to move the l.sfxxx's when possible | 07:14 |
pgavin | I figure gcc should be able to do all that stuff automatically, we just have to figure out how | 07:14 |
pgavin | give it more info about dependencies and cycle times I guess | 07:15 |
pgavin | well, scratch that, it should already know pretty well about dependencies | 07:15 |
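(A minimal C sketch of the load-use hazard being discussed; the function and values are made up for illustration, not taken from any generated code. The idea is simply that an independent operation scheduled between a load and its first use hides the load latency on a pipeline that would otherwise insert a bubble.)

```c
/* Illustrative only. Both variants compute the same thing; the second
 * ordering is what a scheduling-aware compiler would ideally emit,
 * hiding the load latency behind the independent multiply instead of
 * using the loaded value in the very next instruction. */
int loaduse_before(const int *p, int a, int b)
{
    int r = *p;        /* load                                        */
    int x = r + 1;     /* uses r immediately -> potential bubble      */
    int y = a * b;     /* independent work that could have gone first */
    return x + y;
}

int loaduse_after(const int *p, int a, int b)
{
    int r = *p;        /* load                                  */
    int y = a * b;     /* independent work fills the load delay */
    int x = r + 1;     /* r is ready by now, no stall           */
    return x + y;
}
```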
pgavin | does the mor1kx also have a bubble in that case? | 07:15 |
stekern | not anymore, now it branch predicts instead. but it would be possible to solve the critical path from l.sfxxx with a bubble and let the compiler be smart about it instead | 07:17 |
pgavin | ok | 07:17 |
pgavin | I haven't tested the timing but my core forwards the f flag, so there's no bubble | 07:18 |
pgavin | I hope it's not too tight :) | 07:18 |
stekern | (and, if the compiler *is* smart enough about it, it might make sense to weight in the current flag to the branch prediction) | 07:18 |
pgavin | hmm. you mean sending the current f value to the branch predictor along with the PC? | 07:19 |
stekern | yes, which would be the 'old stored' value | 07:20 |
pgavin | interesting idea. wouldn't you need to track whether the branch is bf or bnf to make use of it? | 07:20 |
pgavin | right | 07:20 |
stekern | ah, well, I need to in the current branch predictor too. It's a simple static forward-branches not taken, backwards taken predictor. | 07:21 |
pgavin | ok | 07:22 |
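(For reference, the static prediction scheme stekern describes reduces to a one-line rule on the branch direction. Below is a hedged C sketch; the function name and types are mine and this is not mor1kx RTL. Weighting in the stored flag value, as suggested above, would be the next refinement on top of this.)

```c
#include <stdbool.h>
#include <stdint.h>

/* Static BTFN ("backward taken, forward not taken") prediction:
 * a branch whose target lies at a lower address than the branch
 * itself (typically a loop back-edge) is predicted taken, anything
 * else is predicted not taken. Purely illustrative. */
static bool predict_taken(uint32_t branch_pc, uint32_t target_pc)
{
    return target_pc < branch_pc;
}
```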
stekern | anyways, I pushed this yesterday, which kinda reverts one of your old commits: https://github.com/openrisc/or1k-gcc/commit/bdd3ad496930c61218ea683b9fd3dbcc093b9a14 | 07:22 |
stekern | do you think you could take it for a ride with the nd implementations? | 07:22 |
pgavin | sure | 07:23 |
pgavin | thanks for doing that :) | 07:23 |
pgavin | I didn't think about PIC when I did that | 07:24 |
stekern | I got tired explaining to people why we can't compile the Linux kernel with the or1k-linux- toolchains. So, now we can ;) | 07:24 |
pgavin | I just looked at the code GCC generated and it looked pretty good | 07:24 |
pgavin | lol :) | 07:24 |
stekern | yeah, I wouldn't have bothered changing your stuff if our Linux port didn't link with libgcc.a | 07:25 |
pgavin | is there a way to use the -linux toolchain with newlib? | 07:26 |
pgavin | that would mean I don't have to keep both around :) | 07:26 |
pgavin | I don't think I've even tried it though | 07:26 |
stekern | maybe there is, but I guess nothing will work 'out-of-the-box', since a lot of flags and stuff are assumed according to which toolchain is used | 07:31 |
pgavin | yeah, that's what I was thinking | 07:32 |
olofk | pgavin: Are you interested in joining us at the OpenRISC conference in Munich (probably October 11-12)? | 07:58 |
pgavin | I'm interested | 08:00 |
pgavin | but I'm in a weird spot right now | 08:00 |
pgavin | I'm supposed to graduate in ~4 months and I don't know where I'm going to work | 08:00 |
pgavin | so I guess I won't know until right beforehand | 08:00 |
pgavin | speaking of which... anyone know of any openrisc companies that are hiring? :) | 08:01 |
olofk | pgavin: Yeah, I can understand that. Just realized that I had forgotten to add you to a mail that I sent | 08:02 |
pgavin | I think I received one through a list at one point | 08:03 |
olofk | pgavin: I think unfortunately that everyone in here is looking for an OpenRISC company that is hiring :) | 08:03 |
pgavin | lol | 08:03 |
pgavin | let's just start one | 08:03 |
pgavin | put it on kickstarter | 08:04 |
olofk | Ohh that's a great idea! | 08:04 |
pgavin | it's risky | 08:05 |
olofk | Or we could ask the OpenCores maintainers to crowdfund an ASIC | 08:05 |
pgavin | but I think it could be fun | 08:05 |
pgavin | I was thinking a raspberry-pi like board based on openrisc | 08:06 |
pgavin | with all opencores based chips | 08:06 |
olofk | pgavin: But if you're thinking about returning to Gothenburg, I'm sure that the company I'm working for will be interested in someone with your skills. We try to only hire the best people and have a lot of people with PhDs in different areas | 08:10 |
pgavin | are you working at gaisler? | 08:11 |
olofk | pgavin: No. Qamcom Research & Technology. But we have recently hired some guys from Gaisler | 08:12 |
pgavin | ok | 08:12 |
pgavin | I really really liked gothenburg | 08:13 |
olofk | Because of the weather? :) | 08:13 |
pgavin | I suppose :) | 08:13 |
pgavin | I have family in ulricehamn too | 08:13 |
pgavin | but my fiancee I think would prefer a slightly warmer climate lol | 08:13 |
pgavin | she's a horse trainer and works outside | 08:14 |
olofk | Totally understandable :) | 08:14 |
pgavin | I considered looking at what was available in sophia antipolis, but that means she has to learn french. it's a bit easier to get away with only knowing english in sweden, and just picking up swedish over time | 08:16 |
pgavin | but not possible in france :) | 08:16 |
olofk | Anyway, you have my e-mail address, so if you would like to know more, just give me a mail | 08:16 |
jeremy_bennett | pgavin: I like the Kickstarter idea. But what would make it distinctive? | 08:16 |
pgavin | ok. thanks, I'll keep it in mind | 08:16 |
jeremy_bennett | I think it's hard to make an ASIC in those volumes at a price that works for Kickstarter/Raspberry Pi | 08:17 |
olofk | Yes. We're quite anglophilic here :) | 08:17 |
pgavin | jeremy_bennett: just play up the hard-coded back doors that are possible in closed designs :) | 08:17 |
pgavin | jeremy_bennett: ok | 08:17 |
pgavin | I know nothing about fab volumes and costs | 08:17 |
jeremy_bennett | Is that enough to persuade backers? I quite like the idea of an FPGA board, but more like mbed, so better for control purposes (and with the advantages you highlight). | 08:18 |
jeremy_bennett | One weakness of the Raspberry Pi is that it is all 3.3V and no protection. So it is very easy for a school kid to break it. So everyone has to pay extra for an I2C interface board. | 08:19 |
olofk | If we want an ASIC, we could talk to Richard Herveille perhaps and see if easic can help us. Their process is probably a lot more affordable than a full custom ASIC | 08:19 |
jeremy_bennett | One of the strengths of Arduino by comparison is that if you wire everything up wrong, generally the chip does not break! | 08:19 |
olofk | jeremy_bennett: And another thing is that with an FPGA we can offer 20 i2c ports and 35 SPI ports if we want | 08:20 |
jeremy_bennett | So if you could make an FPGA OpenRISC mbed, with decent protection on the signals that would be a generically very attractive product. | 08:20 |
olofk | Has anyone looked at Arduissimo? He started his crowdfunding campaign on indiegogo just a few days ago | 08:20 |
jeremy_bennett | Even for non-OpenRISC people. | 08:21 |
jeremy_bennett | olofk: Good idea about asking Richard | 08:21 |
olofk | jeremy_bennett: I'm also interested in the other side of the spectrum. A high-performance FPGA with high speed I/O so we can properly do things like DVI/HDMI, USB2/3, SATA, PCI Express. The Open Source IP is falling behind in the fast I/O area | 08:22 |
olofk | And I think that would attract a lot of people, since roughly 50% of everyone coming in here is looking for some sort of Open Source computer | 08:22 |
olofk | I mean, it would be enough to clock the thing at 200MHz (I got mor1kx running at that speed in a Virtex-6 device) | 08:23 |
olofk | without caches, but it's probably optimizable | 08:23 |
olofk | jeremy_bennett: Aren't there already a lot of low-end FPGA boards out there, like de0_nano and papilio one? Are you looking at lowering the costs more than those, or are you specifically looking for the I/O protection? | 08:25 |
pgavin | olofk: what is the memory access latency like without caches? | 08:25 |
olofk | pgavin: Depends very much on the memory technology, but with DDR2, I'd say about 40-50 cycles | 08:29 |
olofk | On the first access | 08:29 |
pgavin | ok. so it bursts | 08:29 |
olofk | Those things like to be read block-wise | 08:29 |
pgavin | right | 08:29 |
olofk | And the trend will probably be trading latency for higher bandwidth. High speed serial memories are starting to appear now | 08:30 |
pgavin | well even if the caches reduce the clock rate by half they'll probably be beneficial | 08:30 |
stekern | (competing with el-cheapo fpga-boards) darn hard to do, given that a lot of the cheapness comes from them being sponsored | 08:30 |
olofk | stekern: I agree | 08:30 |
olofk | I think the ordb2a was sold at basically self-cost, and that was ~€140 | 08:31 |
olofk | Same FPGA as the de0 nano, but with onboard ethernet and a USB connector | 08:31 |
pgavin | has anyone ported to the de2-115 yet? | 08:33 |
olofk | pgavin: https://github.com/openrisc/orpsoc-cores/pull/38 | 08:33 |
olofk | oh.. sorry. That was a DE2-70 | 08:34 |
pgavin | well | 08:34 |
pgavin | I have one, I can try it out | 08:34 |
pgavin | will be good practice | 08:34 |
jeremy_bennett | olofk: It would be good to pull the costs lower. The DE0-Nano costs around $80-90, yet Adapteva can put a Zynq and an Epiphany on a $99 board. | 08:34 |
jeremy_bennett | So I think you could build an FPGA board capable of running OpenRISC for < $50. | 08:35 |
olofk | I would like a board with high speed I/O (like I mentioned above) and the baddest FPGA that the free tools support | 08:35 |
stekern | yeah, me too | 08:35 |
olofk | jeremy_bennett: You're probably right that we could cut costs a bit more, but I'm not sure that the Zynq is all that expensive otoh. | 08:36 |
stekern | jeremy_bennett: sure, you probably could. but it would most likely be at self-cost or even loss-making prices. | 08:36 |
olofk | And I'm not sure that Adapteva finished on the plus side with those prices | 08:36 |
stekern | I mean, the parallella had other agendas than making profit from that board, right? | 08:37 |
jeremy_bennett | But I think making it specifically a microcontroller type board (so like mbed, but with good protection) would be a valuable proposition. | 08:37 |
jeremy_bennett | You really need a low-cost design & manufacturing expert to do it. | 08:37 |
jeremy_bennett | The other thing would be to get a good dev environment to go with it. The reason mbed works is the comprehensive programming and library environment. | 08:37 |
stekern | ...which of course we would have as well, make OpenRISC wide-spread... but you get my point | 08:37 |
olofk | How much is an mbed? | 08:38 |
jeremy_bennett | olofk: Around $40 - it's a Cortex M0 or M3 (two flavours) | 08:38 |
jeremy_bennett | stekern: Fair point | 08:39 |
olofk | jeremy_bennett: Have you considered the papilio one? Sounds like it's in the right price and hardware range | 08:41 |
olofk | http://papilio.cc/index.php?n=Papilio.Buy | 08:41 |
olofk | Haven't explicitly looked for I/O protection though, and I'm not sure what the SW infrastructure looks like | 08:42 |
olofk | And apparently Rob Riglar (the AltOR32 guy) has ported his CPU to the papilio 250k | 08:43 |
olofk | So if we're going for a Xilinx FPGA I suggest a Virtex-6 (XC6VLX75T), Kintex-7 (XC7K160T) or Artix-7 (XC7A200T). All have plenty of Multi Gigabit transceivers and are supported by webpack | 09:00 |
olofk | I'm not as familiar with Altera's options, but they should probably have something similar | 09:01 |
stekern | olofk: did you make any progress with or1k-lsu and or1200 yesterday? | 09:28 |
olofk | stekern: Nope. I tried to read through the div code, but couldn't find anything obvious. It's confirmed to be in the serial divider at least | 09:37 |
olofk | But isn't that the default one? Sounds weird that no one has noticed | 09:37 |
olofk | Oh. and I added craploads of nops between each instruction in the testcase, but that made no difference | 09:38 |
olofk | wallento: How scalable is this multicore thingamabob? Is there any added complexity going from two to three cores? | 09:38 |
olofk | juliusb: What can you tell us about the or1200 divider? Any known issues? | 09:43 |
olofk | ah yes. I think I could confirm that the div gets the correct nominator and denominator (is that what they are called? The numbers above and below the line) | 09:44 |
olofk | Täljare och nämnare (Swedish for numerator and denominator) | 09:45 |
stekern | isn't it numerator/denominator | 09:51 |
olofk | You're probably right. I thought something sounded off with nominator | 09:52 |
olofk | Stupid hardware divider. I can't remember how those things are supposed to work. We should have offloaded heavy calculations to the cloud instead | 09:53 |
stekern | they should just be called "above and below operators", why have fancy words for stuff? | 09:54 |
stekern | I've always had trouble with the "fancy words" and using the right terminology | 09:56 |
olofk | Fancy words are for dumb people | 09:57 |
stekern | as my first steps in mcu programming I implemented something I called "blinking with a led at different intervals to make it glow with different intensity" and at the same time claiming I didn't know how to implement fancy stuff like PWM ;) | 09:57 |
rah | http://www.easic.com/low-cost-power-fpga-nre-asic-90nm-easic-nextreme/easic-nextreme-overview/ | 09:57 |
olofk | haha | 09:57 |
rah | 790 I/Os | 09:58 |
rah | phwoar | 09:58 |
olofk | rah: Not laughing at your easic ;) | 09:58 |
rah | :-) | 09:58 |
rah | I'm glad to hear that :-) | 09:59 |
rah | http://www.easic.com/Spatr7ve/website-wp1/wp-content/uploads/2011/07/Managing-Risk-and-Cost-Nextreme-NEW-ASIC-to-Standard-Cell-ASIC-Migration-v056.jpg | 09:59 |
olofk | Apparently Altera dropped their Hard Copy program (I heard it wasn't beneficial for them) so easic is the only company I know of now in the cheap almost-ASIC market | 09:59 |
olofk | rah: Hmm... most things look good, but I'm missing some multi gigabit transceivers | 10:02 |
olofk | 800MHz LVDS is the fastest I could see | 10:02 |
olofk | Should probably check for external transceivers though. Might be a better idea | 10:02 |
rah | olofk: what do you need multi gigabit tranceivers for, just out of curiosity? | 10:06 |
olofk | rah: usb, sata, pci express, hdmi/dvi | 10:10 |
rah | I see | 10:10 |
olofk | well, usb2 would probably work, and low resolution hdmi | 10:11 |
rah | they have numbers there of "1-10K" units | 10:13 |
rah | I wonder how much it would cost to produce 1000 orpsocs using their "NEW ASIC" | 10:14 |
olofk | I'm interested in that too. Problem is that it's probably pretty hard to get a quote | 10:15 |
rah | aye | 10:15 |
rah | https://bitcointalk.org/index.php?topic=68682.0 | 10:16 |
rah | it doesn't look good | 10:16 |
rah | "including the tool chain, you are looking at approximately $3000 or $4000 per chip" | 10:16 |
rah | ouch | 10:17 |
olofk | What quantities? | 10:18 |
rah | very low; 45 chips | 10:18 |
rah | a handful | 10:18 |
stekern | that sounds pretty cheap | 10:19 |
rah | O_o | 10:20 |
rah | I'm one of the 50% of people who come in here looking for an Open Source computer and there's no way I could spend $4000 on just the chip | 10:23 |
rah | or $3000 | 10:24 |
olofk | That sounds very cheap | 10:24 |
olofk | That's $135000-180000 | 10:25 |
pgavin | so we get 50 people with $3000 each :b | 10:25 |
pgavin | but you still gotta build the board | 10:26 |
pgavin | what would that cost | 10:26 |
olofk | ...ok very cheap might be exaggerating :) | 10:26 |
olofk | pgavin: I say board and chip are two different things | 10:27 |
olofk | The idea with ORSoC's ordb2a board was to make an FPGA board that could be populated with an ASIC later on | 10:27 |
pgavin | ah | 10:27 |
LoneTech | a lot of jumbled text there. is that for a 90nm structured asic? if it's not an urgent one-off I'd think a mosis process design using electric would be more attractive | 10:27 |
olofk | I think we can forget phase two | 10:27 |
pgavin | but you need to make the chip socketable then? | 10:27 |
olofk | pgavin: No, but you can reuse most of the PCB design | 10:28 |
pgavin | ok | 10:28 |
pgavin | so you'd still sell the chip and the board together already assembled | 10:28 |
olofk | I guess so. But with the option to buy stand-alone chips | 10:29 |
pgavin | but would an individual want to buy a single chip? doesn't the soldering need to be automated? | 10:30 |
olofk | I think that the market for those things won't be companies who buy ASICs and put them on their own board | 10:30 |
olofk | Exactly | 10:30 |
olofk | Having the option won't hurt, but I'm not sure it's a good sell. You would probably be much better off with a cheap-ass ARM SoC in that case | 10:30 |
olofk | I think board first, ASIC later is the way to go | 10:31 |
olofk | board with FPGA I mean | 10:31 |
LoneTech | speaking of cheap-ass, the psoc chips have really moved into that region | 10:31 |
olofk | LoneTech: Do you mean the Cypress psoc, or programmable SoCs in general? | 10:31 |
LoneTech | cypress | 10:32 |
olofk | Yeah. They look very interesting, but I haven't had the opportunity to look closer at them | 10:32 |
rah | http://www.easic.com/Spatr7ve/website-wp1/wp-content/uploads/2011/02/eASIC-Nextreme-Product-Brief.pdf | 10:32 |
LoneTech | the sad part is that the reroutable section only has windows tools | 10:32 |
rah | they only do TQFP and *BGA packages for the chips | 10:33 |
olofk | LoneTech: What? In 2014? How come some companies never learn? | 10:33 |
olofk | I'd be all for looking at crowdfunding a high-end FPGA board. Anyone else interested? :) | 10:35 |
rah | olofk: yes :-) | 10:35 |
LoneTech | a bit interested, but honestly I think the Parallella could keep me occupied on that front for a while | 10:36 |
olofk | LoneTech: Me too probably. I have more boards than I have time to play with | 10:36 |
LoneTech | it's a crowdfunded $99 Zynq board, with a crazy powerful parallel processor added on (which it was designed to showcase, but hey, cheap zynq!) | 10:36 |
olofk | But they all miss high-speed transceivers | 10:36 |
LoneTech | what level of high speed? it certainly has >Gb | 10:37 |
olofk | LoneTech: Yes. I bought it partly for the Zynq as well :) | 10:37 |
olofk | LoneTech: Are you sure about that? | 10:37 |
rah | how can you reuse an FPGA board with an ASIC? | 10:38 |
olofk | The Samtec connectors are only a few hundred Mb/s, right? | 10:38 |
rah | surely you'd have to at least move the pads for the chip? | 10:38 |
rah | unless they come in identical packages? | 10:38 |
olofk | rah: You redesign the parts closest to the FPGA. | 10:38 |
rah | ok | 10:38 |
rah | so it'll still require some adjustments | 10:39 |
LoneTech | hm. the feature table for the zynq indicates this one doesn't have the transceivers | 10:39 |
rah | high-end FPGA board with high-speed transceivers, intended to be reused with an ASIC OpenRISC chip, I'm in! :-) | 10:40 |
LoneTech | and confusing numbers too (suggesting 1.5Gbps transceivers) | 10:40 |
rah | olofk: get on it! :-) | 10:40 |
LoneTech | if I recall correctly, and I might have been wrong, even lacking that block it can do some silly speeds.. but the connector might set the limits | 10:41 |
olofk | LoneTech: The LVDS I/O are pretty fast, but I don't think they're >Gb/a | 10:49 |
olofk | rah: I can make a wish list and we leave the PCB designing to someone else | 10:51 |
olofk | ah.. Gb/s I mean. | 10:51 |
LoneTech | right. looking. first thing is that there are serdeses in the new selectio blocks, even when they're not named gigabit transceiver | 10:52 |
LoneTech | okay, the slowest speed grade can do 950Mb/s (table 49, Zynq Z-7020 DC/AC switching characteristics) | 10:54 |
olofk | LoneTech: Wow. That's still pretty fast | 10:54 |
LoneTech | so I was off, but it's still rather impressive compared to other generations | 10:54 |
rah | olofk: that would be a start | 10:54 |
rah | olofk: put it on the wiki with a call for PCB designers? | 10:55 |
LoneTech | document DS187 | 10:55 |
olofk | They should have pushed it 50Mb/s faster and then they could have called it a Gigabit transceiver :) | 10:55 |
LoneTech | actually, the higher speed grades (-2, -3) do 1250 | 10:55 |
olofk | LoneTech: Which speed grade is on the parallella? | 10:55 |
LoneTech | 1C, so it's the slower kind per spec | 10:56 |
* rah wonders again if an EE degree might have been better than CS | 10:58 | |
rah | oh well | 10:58 |
LoneTech | for comparison, the GTP transceivers we don't get do 3.75Gb/s on this speed grade, 6.25 on the faster one | 10:58 |
olofk | rah: I started out in CS and took as many EE courses as I could, and then switched to EE and took as many CS courses as I could. This is probably somewhere in between :) | 10:59 |
rah | hah :-) | 11:00 |
rah | olofk: I was hard core and took only CS courses, even for the free modules where I could take any course offered by the university | 11:01 |
rah | in retrospect, some EE courses might have been a better choice :-) | 11:01 |
-!- Netsplit *.net <-> *.split quits: chad__ | 11:23 | |
-!- Netsplit *.net <-> *.split quits: hno` | 12:04 | |
LoneTech | fwiw, the parallella manual states the PEC_FPGA connector can support 22.8Gbps (which works out to 950Mbps in 24 pairs) | 12:05 |
LoneTech | the connector is rated for much higher speeds | 12:09 |
-!- Netsplit over, joins: hno` | 12:18 | |
blueCmd | stekern: woo! was it scary to do the git push? ;) | 16:32 |
stekern | blueCmd: a little ;) | 17:01 |
stekern | but there's a 'git sucks' commit in there that adds a missing file. that made it a lot less scary | 17:02 |
mohessaid | hello, I found something out. I tried to compile a simple hello world program with all the toolchains and I found that neither the uClibc nor the glibc toolchain can produce a program that runs on openrisc. the result of both toolchains prints this message in linux (in simulation): /bin/sh: hello : not found. and the only toolchain that produces a running program is the one prefixed by or32-linux-, built from gnu-stable trunk. of course | 18:11 |
mohessaid | of course I mean the or1k-uClibc and or1k-linux-gnu | 18:12 |
mohessaid | what do you think the problem is? | 18:13 |
stekern | ah, parallella board arrived now | 18:15 |
stekern | mohessaid: where's your ld.so? | 18:17 |
mohessaid | ld ? | 18:18 |
stekern | yes, it should be /lib/ld.so.1 with glibc | 18:19 |
stekern | and /lib/ld-uClibc.so.0 in uClibc | 18:21 |
stekern | (which is a symlink to the actual file) | 18:21 |
dalias | and once the musl port is done, /lib/ld-musl-or1k.so.1 (or whatever the $ARCH you prefer ends up being :) | 18:22 |
stekern | dalias: =) | 18:24 |
stekern | dalias: have there been any more discussion about the deprecated syscalls btw? | 18:26 |
dalias | not much. some quick checks seemed to suggest open is the only one that needs nontrivial handling (because SYS_open is used directly in several places) | 18:28 |
dalias | the rest can probably just be #ifdef SYS_oldwhatever / use it / #else / emulate with new / #endif | 18:29 |
dalias | since they're only used in one place, emulating the corresponding syscall wrapper | 18:29 |
dalias | erm | 18:29 |
dalias | implementing | 18:29 |
dalias | my brain is fried today | 18:29 |
stekern | that, and SYS_poll in __init_libc | 18:30 |
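(The #ifdef fallback pattern dalias outlines might look roughly like the sketch below inside a libc source file. This is illustrative code with made-up names, not actual musl source; or1k is one of the newer ports whose kernel only provides the *at() syscalls, so a missing SYS_open gets emulated through SYS_openat.)

```c
#include <fcntl.h>        /* AT_FDCWD, O_* flags, mode_t */
#include <unistd.h>       /* syscall()                   */
#include <sys/syscall.h>  /* SYS_open / SYS_openat       */

/* Prefer the legacy syscall where the kernel still provides it,
 * otherwise emulate it with the newer openat() interface. */
static long compat_open(const char *path, int flags, mode_t mode)
{
#ifdef SYS_open
        return syscall(SYS_open, path, flags, mode);
#else
        return syscall(SYS_openat, AT_FDCWD, path, flags, mode);
#endif
}
```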
mohessaid | this is my ld.so.1 http://goo.gl/PgqCNT | 18:33 |
stekern | mohessaid: I asked where it was (i.e. do you have it on your rootfs) | 18:34 |
mohessaid | it is under /opt/openrisc-devel/or1k-linux-gnu/sys-root/lib | 18:36 |
stekern | ok, you need to have it on your rootfs | 18:37 |
stekern | (or link your application statically) | 18:37 |
_franck_ | mohessaid: you can also compile your program with --static | 18:39 |
mohessaid | stekern: what do you say about the ld.so.1? | 18:44 |
stekern | mohessaid: that you need to have it on your rootfs, do you? | 18:56 |
blueCmd | make install_root=/my/nice/initramfs install | 19:07 |
blueCmd | IIRC | 19:07 |
dalias | stekern, yes | 19:32 |
olofk | stekern, blueCmd : I'm writing a few lines about the atomic operations now. How did it work before? I mean, we supported threads before this, right? Or is that unrelated? | 19:39 |
stekern | olofk: first, where are you writing those lines you are constantly referring to? | 19:40 |
stekern | to answer the question, it was handled by issuing a (or1k specific) syscall from userspace | 19:41 |
olofk | stekern: My blog (think something like kernelnewbies.org/LinuxChanges) | 19:41 |
stekern | heh, it took a year for the parallella to arrive, I still wasn't prepared for it (I'm missing micro-hdmi and micro-sdcard) | 19:43 |
olofk | haha | 19:45 |
olofk | stekern, blueCmd: You got mail | 19:45 |
olofk | I should probably contact DHL tomorrow. They tried to deliver monday, but I haven't got a mail, SMS, a note or anything to indicate what they will do next | 19:45 |
stekern | that's the normal DHL style... | 19:46 |
olofk | So what's the normal way to handle it? | 19:47 |
stekern | they seem to have realised that it's not working here, so they let the normal post carrier do the delivery | 19:47 |
stekern | which meant I had to go pick it up at the "post-office", but that's better than going out to the DHL office by the airport | 19:48 |
olofk | I'm ok with picking it up at my post-office... as long as they give me a fucking recipe I can use to pick it up | 19:49 |
olofk | s/recipe/receipt | 19:49 |
stekern | usually they drop a note in your mailbox that says "we tried to deliver a package to you, please contact us so we can arrange a time that suits both of us for us to deliver it to you (or pick it up from our office)" | 19:50 |
stekern | the only problem I have experienced is that the "arrange a time that suits both of us" actually just means "a time that suits us" | 19:51 |
olofk | Mm.. but it looks like they didn't. One problem is that I've moved since I ordered the board almost two years ago. So I'm not sure if they have tried to drop it at my old apartment | 19:52 |
olofk | Oh well. I'll find out tomorrow | 19:52 |
blueCmd | olofk: as stekern said: system call that disabled interrupts, did the thing and returned to user space | 19:53 |
blueCmd | I think "ls" did about 10k of those on a normal run | 19:53 |
blueCmd | it was _quite_ slow, but worked | 19:53 |
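(A toy single-core model of the old or1k_atomic syscall approach blueCmd and stekern describe: the kernel makes a read-modify-write "atomic" simply by keeping interrupts off around it. All names here are made up and irq_disable()/irq_enable() are stubbed so the file compiles standalone; the real handler sits in the kernel and masks interrupts in the supervision register.)

```c
#include <stdio.h>

static void irq_disable(void) { /* stub: the kernel would mask interrupts */ }
static void irq_enable(void)  { /* stub: the kernel would unmask them     */ }

/* Compare-and-swap emulated the pre-l.lwa/l.swa way: nothing can preempt
 * us on this core while interrupts are off, but another core could still
 * touch *addr, which is why this scheme breaks down on SMP. */
static long emulate_cmpxchg(long *addr, long expect, long new_val)
{
    long old;

    irq_disable();
    old = *addr;
    if (old == expect)
        *addr = new_val;
    irq_enable();

    return old;
}

int main(void)
{
    long v = 0;
    emulate_cmpxchg(&v, 0, 42);
    printf("%ld\n", v);   /* prints 42 */
    return 0;
}
```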
olofk | Interesting. Can you provide some more estimates on the number of instructions required before and after... if that's deterministicish | 20:00 |
blueCmd | olofk: number 1 is that it doesn't require the context switch to kernel mode and interrupts to be disabled | 20:01 |
blueCmd | and that passing an invalid pointer doesn't crash the kernel, but that's just laziness :) | 20:02 |
olofk | As long as we can just set the supervisor bit from user mode, I don't think that's a very big deal :) | 20:03 |
blueCmd | olofk: you can ask stekern to do 'strace ls 2>&1 | grep or1k_atomic -c' and with '... | grep or1k_atomic -v -c' for the latest numbers | 20:03 |
olofk | stekern: Can you do 'strace ls 2>&1 | grep or1k_atomic -c' and with '... | grep or1k_atomic -v -c' for the latest numbers | 20:03 |
olofk | ? | 20:03 |
blueCmd | I was like 'that's very close to what I wrote!' | 20:04 |
blueCmd | olofk: I forgot how l.sys works, maybe it's low overhead | 20:05 |
olofk | :) | 20:05 |
olofk | Is this a pure linux thing, or is it in gcc/binutils as well? | 20:06 |
blueCmd | it's not in binutils, it's currently not in gcc (but will be, http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html) | 20:07 |
blueCmd | it is in glibc | 20:07 |
olofk | aha. So it needs to go into the other C libraries as well if we want to use other C libs? Or is the plan to move them to GCC? | 20:09 |
blueCmd | olofk: yes, it needs to be in other libcs | 20:10 |
blueCmd | unless you want them to depend on gcc specific extensions | 20:10 |
blueCmd | for glibc that's fine, but for musl not so much | 20:10 |
blueCmd | and for uClibc I don't think anyone cares about atomic operations (I don't think it has any currently) | 20:11 |
olofk | ah ok | 20:12 |
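(The builtins blueCmd links to are used like this from plain C; a small, hedged example. My assumption is that on or1k these would eventually expand to l.lwa/l.swa sequences once the gcc support he mentions is in place, rather than calling out to the C library.)

```c
#include <stdio.h>

int main(void)
{
    int lock = 0;

    /* atomically set lock to 1 if (and only if) it is still 0 */
    if (__sync_bool_compare_and_swap(&lock, 0, 1))
        puts("lock acquired");

    /* atomic fetch-and-add, returns the old value */
    int before = __sync_fetch_and_add(&lock, 1);

    printf("before=%d after=%d\n", before, lock);  /* before=1 after=2 */
    return 0;
}
```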
olofk | I should probably point my blog to juliusbaxter.net/openrisc-irc instead :) | 20:12 |
ams | it is the best blog around! | 20:13 |
blueCmd | haha | 20:13 |
* ams is fighting x11, pthreads, and crazy mmap switches all at the same time! | 20:13 | |
olofk | ams: Sounds lovely | 20:14 |
ams | it isn't. | 20:14 |
ams | did i say that it is 20 year old code? | 20:14 |
blueCmd | why fight it? | 20:14 |
ams | blueCmd: because otherwise i will be destroyed | 20:15 |
ams | and die, a painful death ... or be tortured for infinite | 20:16 |
ams | time | 20:16 |
olofk | Any other problem with the syscall approach other than it is slow? Does it work on multi core for example? | 20:17 |
stekern | it doesn't | 20:17 |
blueCmd | olofk: probably not | 20:17 |
blueCmd | doesn't do cache snooping et.al | 20:17 |
olofk | Does the lwa/swa help with that somehow? Or do both approaches work with proper cache coherency handling? | 20:18 |
olofk | I mean, the atomic operations don't magically solve the cache issue, right? | 20:19 |
blueCmd | well, they kind of do | 20:19 |
blueCmd | you can have a swa on the other CPU invalidate the lwa on the first CPU | 20:19 |
blueCmd | if I understood the concepts correctly | 20:19 |
blueCmd | and that is done by snooping the accesses | 20:20 |
stekern | the syscall method doesn't work, because other cores can access the memory. So it's not completely related to cache coherency, that's another problem | 20:20 |
olofk | ams: Let me guess. You are on a burning train that has lost control and is heading over a cliff. And the only way to stop it is to remotely log in to the control room and start an X application, but it turns out the application is threaded and uses mmap. Am I far off? | 20:21 |
stekern | the only thing that's common with the cache coherency and atomicity between cores, is that you snoop addresses to determine if someone has accessed the memory area in question | 20:22 |
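(To make the lwa/swa mechanism above concrete, here is a rough compare-and-swap loop in GCC inline assembly. Everything in it is an assumption on my part, pieced together from the discussion and the OpenRISC instruction set, not code from glibc or the kernel: l.lwa loads a word and sets a reservation, and l.swa only performs the store and sets the flag if the reservation still holds, i.e. no other store, including another core's swa, has hit that address in between, so the loop retries until it wins.)

```c
/* Hedged sketch of an l.lwa/l.swa compare-and-swap for or1k; assumes
 * l.swa sets the flag on success and clears it when the reservation
 * was lost. Needs an or1k gcc to build. */
static inline long or1k_cas(volatile long *ptr, long expect, long new_val)
{
    long old;

    __asm__ volatile(
        "1:  l.lwa  %0, 0(%1)   \n"  /* load word + set reservation      */
        "    l.sfeq %0, %2      \n"  /* is it the expected value?        */
        "    l.bnf  2f          \n"  /* no -> give up, return old value  */
        "     l.nop             \n"
        "    l.swa  0(%1), %3   \n"  /* store only if reservation holds  */
        "    l.bnf  1b          \n"  /* reservation lost -> retry        */
        "     l.nop             \n"
        "2:                     \n"
        : "=&r" (old)
        : "r" (ptr), "r" (expect), "r" (new_val)
        : "cc", "memory");

    return old;
}
```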
ams | olofk: quite, sounds like a painless death | 20:22 |
olofk | blueCmd, stekern: Thanks. That will give me enough pointers to write something down | 20:22 |
olofk | ams: Did I mention that the train moves very slowly and that the cliff is only a few meters high? | 20:22 |
ams | olofk: sounds stil like a nice death | 20:23 |
stekern | root@or1k-debian:~# strace ls 2>&1 | grep or1k_atomic -c | 20:24 |
stekern | 1559 | 20:24 |
ams | olofk: consider hacking on x11r4 code, which was written 20 years ago, which uses old old pthreads that have been changed, and on a gnu/linux box that has a bunch of smash stack stuff, and memory protection stuff which causes the program to segfault in unexpected manners while being strapped to the wheel of a motogp motorbike, and driving on spikes while you listen to abba | 20:24 |
stekern | root@or1k-debian:~# strace ls 2>&1 | grep or1k_atomic -v -c | 20:24 |
stekern | 161 | 20:24 |
blueCmd | olofk: see above | 20:25 |
stekern | dig in the Dancing Queen? | 20:26 |
olofk | stekern: Thanks. Now my puny brain just has to figure out what I'm reading :) | 20:26 |
olofk | Is that the number of calls to or1k_atomic vs. other sys calls? | 20:26 |
stekern | blueCmd: I've started playing a bit with SMPing Linux | 20:28 |
olofk | ams: Are you debugging eniac or something? | 20:29 |
stekern | first step is to get wallento's mor1kx demo to boot Linux on one of the cores... | 20:30 |
ams | olofk: open genera, ivory, lisp machine | 20:33 |
olofk | A LISP machine? That's cool. Never thought I would come across someone who actually used one :) | 20:35 |
olofk | Are you using it to parse your .emacs file? | 20:35 |
ams | olofk: i've used a real one too ... | 20:35 |
ams | and i happen to have source code for the ivory emulator that ran on the alpha ... and been fixing it for uhm, more sanity. | 20:36 |
blueCmd | stekern: cool! don't let me stop your progress, I don't have time to do everything I want to do anyway :) | 20:36 |
blueCmd | stekern: do you have a rough break-down of your milestones? | 20:37 |
stekern | blueCmd: I'll keep it transparent and bazaary enough for you to chip in, don't worry ;) | 20:37 |
olofk | stekern: Do you have a link to the old arch spec where the original atomic operations were written down? Or just tell me if there were any descriptions, or if they were just mentioned | 20:37 |
blueCmd | stekern: nice! | 20:37 |
stekern | umm, milestones... hack on it until it works? that good enough? =) | 20:39 |
blueCmd | stekern: well, normally when I work I have stuff like "make it compile", "run a simple program" and so on - not just the waterfall "do it" model :P | 20:40 |
blueCmd | I mean, what does "SMPing" entail? | 20:40 |
stekern | isn't "tackle the problems as they appear" quite the opposite to the waterfall model? | 20:43 |
blueCmd | too tired, not gonna argue | 20:45 |
stekern | but I digress, first milestone is to solve the issue with register storage on exceptions | 20:45 |
* blueCmd is traveling around in Sweden this week and giving lectures on what he does for a living | 20:45 | |
blueCmd | with that, 1 day in one city turns out to be quite taxing :( | 20:46 |
stekern | I can imagine | 20:48 |
olofk | blueCmd: Ping me if you're in Gothenburg and would like to meet up for a coffee or something | 20:48 |
blueCmd | olofk: what are you doing tomorrow? | 20:48 |
blueCmd | turns out that's where I am tomorrow | 20:48 |
blueCmd | and my lecture is cancelled there | 20:48 |
olofk | blueCmd: Force feeding my baby penicillin. Other than that... not much | 20:48 |
olofk | If you're not interested in hanging out on the town, you can come around to my house? I can pick you up in that case | 20:49 |
blueCmd | olofk: will I get deadly sick by super-baby-germs? | 20:50 |
olofk | Nah. It's just an ear infection, so if you keep your fingers out from her ears you'll be fine | 20:50 |
blueCmd | let's switch to PM | 20:51 |
olofk | Please stop implementing stuff for a while. This blog post will never get finished if you keep up this speed | 22:28 |
--- Log closed Fri May 09 00:00:40 2014 |