IRC logs for #openrisc Wednesday, 2014-06-25

--- Log opened Wed Jun 25 00:00:49 2014
06:38 <wallento> stekern: will continue the asynchronous communication :)
06:39 <wallento> the implementation does not necessarily need to stall the pipeline on a snoop hit
06:40 <wallento> it writes the tag memory, and the cache state machine is changed accordingly to stall the pipeline only if there is a conflict
06:40 <wallento> this works easily for a read and an invalidation, refill is not a problem at all
06:40 <wallento> the only thing left is writes
06:41 <wallento> here the problem I described yesterday comes up, but I am sure now it is more an LSU thing than a pipeline thing
06:42 <wallento> the issue is obvious when using the normal tag memory
06:44 <wallento> if you are sure we don't want to follow this any more (always an extra snoop memory, maybe with the option OPTION_CACHE_SNOOP == "ENABLED"), then it works now. I can remove all code that is related to tag memory multiplexing and we are fine, but I can also keep it in, continue the documentation, and add comments on why the tag memory multiplexing did not work
06:45 <wallento> regarding the documentation: I will continue my large comment on top with everything I have understood so far regarding the cpu->lsu->cache interfaces, hopefully you can check it for correctness :)
06:46 <wallento> finally one suggestion: let's make operator precedence explicit. It always needs extra thinking to check what a | b & c & !d means, and especially multiline assignments should be structured (using brackets to logically group statements). I think that makes changing the code much easier for an outsider
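The precedence worry can be made concrete with a minimal Verilog sketch (all signal names below are hypothetical, chosen only to illustrate the expression quoted in the log): Verilog's `&` binds tighter than `|`, so the two forms are equivalent, but only one says so explicitly.

```verilog
// Hypothetical signals for illustration only. Since & binds tighter
// than |, both assignments compute the same value; the second needs
// no mental precedence lookup by the reader.
wire implicit_form = a | b & c & !d;       // relies on implicit precedence
wire explicit_form = a | (b & c & (!d));   // same logic, grouping spelled out
```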
08:28 <wallento> stekern: I did a clean restart here, only with the extra tag memory
08:28 <wallento> I will continue tomorrow, I think I may have missed small pieces, but this should generally be it
08:30 <wallento> with the elf I uploaded, the first snoop hit happens exactly during a sequence of four writes at 144.18 us. At the moment one write is simply lost; maybe we can take up the discussion at this state then, maybe we can also arrange a time when we are both online :-D
08:38 <stekern> "if there is a conflict" - isn't there always a conflict?
08:38 <stekern> I mean, there's only one write port on the tag mem, and you write to it on every cycle
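The "extra tag memory" approach under discussion sidesteps exactly this write-port conflict by giving snoops their own copy of the tags. A rough sketch of how such a duplicated tag RAM might be wired (the option name, port names, and widths are assumptions loosely following mor1kx conventions, not the actual implementation):

```verilog
// Sketch: a second tag RAM dedicated to snoop lookups, guarded by a
// build option. It is written in lock-step with the main tag RAM but
// read with the snooped bus address, so a snoop compare never competes
// with the CPU-side tag access for the single read port.
generate
if (OPTION_DCACHE_SNOOP != "NONE") begin : snoop_tags
   mor1kx_simple_dpram_sclk
     #(.ADDR_WIDTH(TAG_INDEX_WIDTH), .DATA_WIDTH(TAG_WIDTH))
   snoop_tag_ram
     (.clk   (clk),
      .raddr (snoop_index),     // index bits of the snooped address
      .re    (snoop_valid),
      .waddr (tag_windex),      // mirrors every main tag-RAM write
      .we    (tag_we),
      .din   (tag_din),
      .dout  (snoop_tag_dout)); // compared against the snooped tag bits
end
endgenerate
```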
08:40 <stekern> on operator precedence: personally, I think the brackets make the code harder to read...
08:42 <stekern> that said, there are maybe some multiline statements that would be clearer with brackets somewhere
08:45 <stekern> and cases where the statements can be broken down into smaller sub-statements, for that matter
08:50 <wallento> there is only a conflict if there is a cache access; I mean, you should only stall the pipeline if there is also a cache access
08:51 <wallento> I assume lsu_valid_o is only used when there is an lsu access, then you are right
08:52 <stekern> ah, yes. of course
08:52 <stekern> and it is
08:52 <stekern> I tried that on your old code, but it didn't quite work though
08:52 <wallento> by simply &-ing lsu_valid_o and snoop_hit I get the same issue as before
08:53 <wallento> meaning I need to handle addresses differently, which should be possible
08:54 <wallento> I did this: assign lsu_valid_o = (lsu_ack | access_done) & !tlb_reload_busy & !dc_snoop_hit;
08:54 <wallento> now to my "but"
08:54 <wallento> we will get a very long critical path
08:55 <wallento> I can see that req_i goes down immediately, so it is combinational with lsu_valid
08:55 <stekern> the problem is that the writes to the store buffer will go through if you just connect it straight to lsu_valid
08:55 <wallento> ah, yes, I also put it into store_buffer_write
08:56 <stekern> yeah, at least since you use stb to determine the snoop
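Putting the snoop gate into both places mentioned above might look roughly like this. Only the `lsu_valid_o` line is quoted from the log itself; the store-buffer gating and the `store_buffer_write_pending` signal are illustrative assumptions, not the mor1kx code:

```verilog
// Sketch: block both the pipeline ack and the store-buffer write while
// a snoop hit is being serviced, so no store is silently dropped.
// lsu_valid_o is taken from the discussion above; the second assign and
// store_buffer_write_pending are hypothetical.
assign lsu_valid_o        = (lsu_ack | access_done) &
                            !tlb_reload_busy & !dc_snoop_hit;
assign store_buffer_write = store_buffer_write_pending & !dc_snoop_hit;
```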
08:56 <wallento> the critical path starts in the pipeline, depends on an expensive comparison, and goes back into the pipeline and back to the cache
08:56 <wallento> so it seems now, but I need to think it through
08:56 <stekern> you shouldn't really do that, connect ack & stb
08:57 <wallento> ack & stb?
08:57 <wallento> on the bus?
08:58 <stekern> yeah, but I realised that that isn't related to the critical path you are speaking about
08:59 <stekern> the critical path shouldn't be any more critical than the current dc_ack path, no?
09:00 <stekern> but yeah, it's going to make more critical paths, for sure. Don't think you can avoid that though
09:00 <wallento> maybe we can handle things in a more involved way in the cache to break the critical path
09:01 <wallento> write down which signals are early and which are late
09:01 <wallento> and req_i seems to be the very latest and should not depend on the snoop comparison, I assume
09:01 <stekern> I think even snoop_hit could be registered
09:02 <stekern> I mean, it's not like cache coherency can be expected to be atomic, right?
09:02 <wallento> yes, provided the current write address is registered accordingly
09:02 <stekern> yes, of course
09:02 <wallento> no, but it is more the problem that the pipeline advances while the write did not properly take effect
09:03 <wallento> then we can delay the next write implicitly and take the old address
09:03 <wallento> this should be possible
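The "register snoop_hit and replay the write with the old address" idea could be sketched like this. All names here are hypothetical; this is one possible shape of the fix under the assumptions stated in the comments, not the actual mor1kx code:

```verilog
// Sketch: register the snoop comparison to break the combinational path
// from the bus-address compare back into the pipeline. A write that
// collides with a snoop hit is replayed from the saved address, so the
// pipeline can advance without the store being lost.
reg        snoop_hit_r;
reg [31:0] write_adr_r;     // last write address, kept for the replay
reg        write_replay_r;  // a write was deferred by a snoop hit

always @(posedge clk) begin
   // One cycle of latency is acceptable: coherency need not be atomic.
   snoop_hit_r <= snoop_valid & snoop_tag_match;
   if (dc_we)
      write_adr_r <= dc_adr;                 // remember the address
   write_replay_r <= dc_we & snoop_hit_r;    // redo this write next cycle
end
```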
09:03 <stekern> right, but that's just a bug that needs fixing ;)
09:04 <wallento> it is something in the WRITE: ... state in the sequential part :-D
09:04 <wallento> but I am not sure what ;)
09:04 <stekern> in the lsu or the cache?
09:04 <wallento> or in the dc_adr = .. in the LSU
09:05 <wallento> but fortunately it is much easier to develop now without the tag memory multiplexing
09:06 <wallento> I will continue later tonight, need to do some work now :)
11:03 <olofk> I'm considering reverting the move of the section submodule to a .py in FuseSoC. It seems to cause a lot of problems for people upgrading and it doesn't really add any value
13:09 <arokux> hi guys, what projects are you currently working on?
13:57 <olofk> arokux: currently working on removing everything in the house up to one meter that is either sharp or could be swallowed
13:58 <arokux> olofk: and if asked about openrisc?
13:59 <olofk> not much lately, but extending and bugfixing FuseSoC, making ports for a few new boards, and wishbone infrastructure stuff
14:15 <arokux> I see
15:43 <wallento> stekern: first test passes again
15:44 <wallento> [CfE - Call for Elfs] needs more testing now
15:46 <wallento> ~400 snoop hits for a newlib setup, two mallocs and counting on the same cache line
15:48 <olofk> sorry, baby on the keyboard
15:49 <wallento> you have a ǻ on your keyboard?
15:49 <olofk> apparently. She has managed to find all kinds of crazy shortcuts and symbols :)
15:50 <wallento> ah, like that turn-the-windows-desktop-upside-down combination grandmas find :)
16:02 <olofk> exactly like that :)
16:03 <olofk> btw wallento, I think we should start sending out invitations to orconf soon, but you will have to have the final word on how many people we can fit in
16:03 <wallento> we can fit in as many as you want
16:04 <wallento> we should definitely start inviting people
16:04 <wallento> optimal is something around 30-40, as they fit in our seminar room
16:04 <olofk> sounds reasonable
16:05 <olofk> I'll prepare some special invitations for the people I wrote down in the mail some time ago, and we can go out with an official invitation and event registration when we get around to that
16:06 <wallento> we can also get an auditorium with an arbitrary number of places, as they are mostly free on weekends
16:06 <wallento> but I think a seminar setting is much better
16:06 <wallento> plus there is a coffee machine, table soccer and beer
16:06 <olofk> yes, most people are probably there both for the talks and to get some work done
16:06  * olofk is calm now
16:07 <wallento> there is also a room with 18 computer places on the same floor
16:07 <wallento> and a separate discussion room with 12 places
16:08 <wallento> I think this fits the conference setting well
16:08 <olofk> I think that aiming for 30-40 is a good idea, but it sounds like there are other options if we have seriously under- or overestimated the interest
16:09 <wallento> I can find an alternative on pretty short notice, as it is a non-profit event
16:09 <olofk> that sounds very good
16:10 <olofk> so... we'll start a first round of invitations then. And if someone wants to set up a dedicated site, just ping me and I'll reroute the domain there
16:12 <olofk> bonus points if someone hosts it on an FPGA board running OpenRISC :)
16:13 <olofk> extra bonus points if the web server is written in verilog
18:28 <olofk> sb0: you are on my list of people to invite to orconf, but as I have noticed that you hang around here nowadays, I'll just say that we would be delighted to have you join us, either as a presenter or as a regular visitor
18:29 <olofk> that doesn't mean I'm excluding any of you other people in here. You are of course all very welcome to join us
18:29 <sb0> October in Munich? hmm... I'll have moved to Hong Kong. I'll think about it, though it'll probably be difficult to come. Thanks for the invite :)
18:30 <olofk> you mentioned moving there some time ago. Have you settled down now? Is it all you hoped for? ;)
18:31 <sb0> not yet - but the date is set, July 29 :)
18:34 <olofk> well, I hope that your future self will enjoy it there then :)
19:27 <Findeton> when/where is it going to take place?
19:28 <Findeton> oh, reading it: October in Munich
19:28 <Findeton> well, by October I might be living there actually lol
19:34 <olofk> Findeton: then you have no excuse to miss out :)
21:33 <olofk> I should dust off my book on data structures
22:16 <olofk> juliusb: nice picture of München :)
22:18 <olofk> I think I can add mail aliases to the domain as well. Should we use that perhaps? Like info@orconf.org
22:52 <juliusb> olofk: that's a great idea. I wasn't sure about our personal emails on there
22:52 <juliusb> info@orconf and organisers@ (maybe the American spelling, too, in case Wilson Snyder decides to email us)
--- Log closed Thu Jun 26 00:00:51 2014

Generated by irclog2html 2.15.2 by Marius Gedminas