issue_comments: 398198466
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/2237#issuecomment-398198466 | https://api.github.com/repos/pydata/xarray/issues/2237 | 398198466 | MDEyOklzc3VlQ29tbWVudDM5ODE5ODQ2Ng== | 1217238 | 2018-06-18T21:16:24Z | 2018-06-18T21:16:24Z | MEMBER | I vaguely recall discussing chunks that result from indexing somewhere in the dask issue tracker (when we added the special case for a monotonic increasing indexer to preserve chunks), but I can't find it now. I think the challenge is that it isn't obvious what the right chunk sizes should be. Chunks that are too small also have negative performance implications. Maybe the automatic chunking logic that @mrocklin has been looking into recently would be relevant here. | `{"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}` |  | 333312849 |
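
The comment above refers to how integer indexing of a dask-backed array changes its chunk layout, and to dask's special case for monotonic increasing indexers. The sketch below is not from the comment; it is a minimal, illustrative example using plain `dask.array` with made-up array and chunk sizes, and the exact chunk layouts printed will depend on the dask version.

```python
# Minimal sketch of the chunking behaviour discussed above (illustrative only).
import numpy as np
import dask.array as da

x = da.ones(1_000_000, chunks=100_000)  # 10 chunks of 100_000 elements

# A monotonically increasing indexer hits the special case mentioned in the
# comment, so the output chunks stay roughly aligned with the input chunks.
monotonic = np.arange(0, 1_000_000, 2)
print(x[monotonic].chunks)

# An arbitrary (shuffled) indexer gives dask no such guarantee; the resulting
# chunk layout depends on the dask version and may be far from ideal, e.g.
# many small chunks, which carries its own performance cost.
shuffled = np.random.permutation(1_000_000)[:500_000]
y = x[shuffled]
print(y.chunks)

# One possible mitigation: let dask's automatic chunking pick sizes afterwards.
y = y.rechunk("auto")
print(y.chunks)
```
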