issue_comments: 451206728
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/2624#issuecomment-451206728 | https://api.github.com/repos/pydata/xarray/issues/2624 | 451206728 | MDEyOklzc3VlQ29tbWVudDQ1MTIwNjcyOA== | 2443309 | 2019-01-03T16:59:06Z | 2019-01-03T16:59:06Z | MEMBER | @ktyle - glad to hear things are moving for you. I'm pretty sure the last chunk in each of your datasets is smaller than the rest. So after concatenation, you end up with a small chunk in the middle and at the end of the time dimension. I bet if you used a chunk size of 172 (which divides evenly into 2924), you wouldn't need to rechunk. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 393214032 |
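The chunk arithmetic behind the comment can be sketched in plain Python. This is a minimal illustration, not the original poster's code: the `chunk_sizes` helper and the chunk size of 1000 are hypothetical, chosen only to show how a non-divisor chunk size leaves a ragged final chunk in each file, which after concatenation becomes a small chunk in the middle of the time dimension.

```python
def chunk_sizes(length, chunk):
    """Chunk sizes dask-style chunking would produce for one array of `length`."""
    full, rem = divmod(length, chunk)
    return [chunk] * full + ([rem] if rem else [])

# Hypothetical setup: two files, each with 2924 time steps (as in the comment).
per_file = 2924

# A chunk size that does not divide 2924 leaves a short final chunk per file,
# so the concatenated time axis has small chunks mid-stream and at the end.
ragged = chunk_sizes(per_file, 1000) + chunk_sizes(per_file, 1000)
# -> [1000, 1000, 924, 1000, 1000, 924]

# 172 divides 2924 exactly (172 * 17 == 2924), so every chunk is uniform
# and no rechunk is needed after concatenation.
uniform = chunk_sizes(per_file, 172) + chunk_sizes(per_file, 172)
assert set(uniform) == {172}
```

With `chunks={"time": 172}` passed when opening each dataset, the concatenated result would already have uniform chunks along time.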