issue_comments: 306850272
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/1396#issuecomment-306850272 | https://api.github.com/repos/pydata/xarray/issues/1396 | 306850272 | MDEyOklzc3VlQ29tbWVudDMwNjg1MDI3Mg== | 9655353 | 2017-06-07T16:30:04Z | 2017-06-07T16:50:40Z | NONE | That's great to know! I think there's no need to try my 'solution' then, except perhaps out of pure interest. It would of course be interesting to know why a 'custom' chunked dataset was apparently not affected by the bug, and whether that was indeed the case. EDIT: I read the discussion on the dask GitHub and the xarray mailing list. It's probably because when explicit chunking is used, the chunks are not aliased and fusing works as expected. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 225774140 |
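
For context, the "explicit chunking" the comment refers to is passing per-dimension chunk sizes to xarray's dask integration rather than letting xarray pick the chunking. The sketch below is illustrative only; the file name `data.nc` and the chunk size of 100 are hypothetical and not taken from the issue thread.

```python
# Minimal sketch of explicit vs. automatic chunking in xarray (hypothetical file/sizes).
import xarray as xr

# Automatic chunking: xarray/dask choose the chunk layout for "data.nc".
ds_auto = xr.open_dataset("data.nc", chunks={})

# Explicit chunking: chunk sizes are spelled out per dimension, so the
# resulting dask chunks are concrete rather than derived from the whole array.
ds_explicit = xr.open_dataset("data.nc", chunks={"time": 100})

# Inspect the resulting dask chunk layouts.
print(ds_auto.chunks)
print(ds_explicit.chunks)
```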