issue_comments: 409565674
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/2329#issuecomment-409565674 | https://api.github.com/repos/pydata/xarray/issues/2329 | 409565674 | MDEyOklzc3VlQ29tbWVudDQwOTU2NTY3NA== | 12278765 | 2018-08-01T12:58:31Z | 2018-08-01T12:58:31Z | NONE | I ran a comparison of the impact of chunk sizes with a profiler: … I am not sure if the profiler results are useful: … In the case of chunks on … I don't know if this helps, but it would be great to have a solution or workaround for that. Surely I am not the only one working with datasets of that size? What would be the best practice in my case? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | 345715825 |
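
For context, a minimal sketch of how such a chunk-size comparison could be run with dask's built-in diagnostics. This is not the commenter's code; the file name `data.nc` and the chunk sizes are illustrative assumptions.

```python
# Sketch: compare the impact of two chunk sizes on a simple reduction,
# using dask's Profiler/ResourceProfiler. "data.nc" and the chunk
# dictionaries are hypothetical placeholders.
import xarray as xr
from dask.diagnostics import Profiler, ResourceProfiler

for chunks in ({"time": 100}, {"time": 2000}):
    # Open lazily with the given dask chunking.
    ds = xr.open_dataset("data.nc", chunks=chunks)
    with Profiler() as prof, ResourceProfiler() as rprof:
        ds.mean().compute()  # trigger the actual computation
    # prof.results holds one record per executed dask task.
    print(f"chunks={chunks}: {len(prof.results)} tasks")
```

After a run, `prof.visualize()` renders the collected task timings as an interactive plot, which makes the per-chunk-size overhead easier to inspect than raw counts.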