issue_comments: 881641897
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/5604#issuecomment-881641897 | https://api.github.com/repos/pydata/xarray/issues/5604 | 881641897 | IC_kwDOAMm_X840jMmp | 5635139 | 2021-07-16T18:36:45Z | 2021-07-16T18:36:45Z | MEMBER | The memory usage does seem high. Not having the indexes aligned makes this an expensive operation, and I would vote to have that fail by default (ref: https://github.com/pydata/xarray/discussions/5499#discussioncomment-929765). Can the input files be aligned before attempting to combine the data? Or are you not in control of the input files? To debug the memory, you probably need to do something like use… | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 944996552 |
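
The comment's suggestion to align the inputs before combining can be checked up front with `xarray.align(..., join="exact")`, which raises instead of silently reindexing, so misaligned files fail fast rather than triggering the expensive alignment inside the combine step. A minimal sketch, assuming NetCDF inputs and a hypothetical `input_*.nc` glob pattern (the file pattern and the `combine_by_coords` call are illustrative, not taken from the issue):

```python
import glob

import xarray as xr

# Hypothetical input paths; substitute the real files being combined.
paths = sorted(glob.glob("input_*.nc"))
datasets = [xr.open_dataset(p) for p in paths]

try:
    # join="exact" raises a ValueError if the indexes are not identical,
    # instead of reindexing every dataset onto a common (larger) index.
    xr.align(*datasets, join="exact")
except ValueError as err:
    print(f"Inputs are not aligned: {err}")
else:
    # Only combine once the indexes are known to line up.
    combined = xr.combine_by_coords(datasets)
```

This mirrors the check the comment argues should happen by default: if the `ValueError` fires, the files need to be regridded or subset onto a common set of coordinates before combining.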