issue_comments: 1014462537
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/3213#issuecomment-1014462537 | https://api.github.com/repos/pydata/xarray/issues/3213 | 1014462537 | IC_kwDOAMm_X848d3hJ | 40465719 | 2022-01-17T12:20:18Z | 2022-01-17T12:20:18Z | NONE | I know. But having sparse data that I can treat as if it were dense allows me to unstack without running out of memory, and then ffill & downsample the data in chunks. It would be nice if xarray automatically converted the data from sparse back to dense when operating on the chunks, just like pandas does. The picture shows that I'm already using nbytes to determine the size. | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 479942077 |
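The densify-then-ffill pattern the comment describes can be sketched with plain numpy. This is an illustrative sketch only, not xarray's or the sparse library's API: the COO-style `(coords, values)` representation and the helper names `densify_chunk` and `ffill_2d` are assumptions made for the example.

```python
import numpy as np

def densify_chunk(coords, values, shape):
    """Materialize a COO-style sparse chunk as a dense array of NaNs.

    coords: (ndim, nnz) integer array of nonzero positions (hypothetical layout).
    values: (nnz,) array of the stored values.
    """
    dense = np.full(shape, np.nan)
    dense[tuple(coords)] = values
    # As in the comment, dense.nbytes can be checked before materializing
    # to make sure the chunk fits in memory.
    return dense

def ffill_2d(a):
    """Forward-fill NaNs down the rows of a dense 2-D array."""
    mask = np.isnan(a)
    # For each cell, remember the row index of the last non-NaN value above it.
    idx = np.where(~mask, np.arange(a.shape[0])[:, None], 0)
    np.maximum.accumulate(idx, axis=0, out=idx)
    return a[idx, np.arange(a.shape[1])]

# One chunk: values 1.0 at (0, 0) and 5.0 at (2, 1) in a 3x2 grid.
coords = np.array([[0, 2], [0, 1]])
values = np.array([1.0, 5.0])
dense = densify_chunk(coords, values, shape=(3, 2))
filled = ffill_2d(dense)
```

After filling, column 0 carries the value 1.0 down all three rows, while column 1 stays NaN until its first observation at row 2.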