issue_comments: 880854744
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/5604#issuecomment-880854744 | https://api.github.com/repos/pydata/xarray/issues/5604 | 880854744 | MDEyOklzc3VlQ29tbWVudDg4MDg1NDc0NA== | 49487505 | 2021-07-15T16:45:17Z | 2021-07-15T16:45:17Z | NONE | My temporary workaround for this is to call open_dataset on each of the files, storing u and ubar in two separate lists, then saving to file after running xr.concat on both of them. They concatenate just fine, and the resulting file is about the expected size of 23 GB; the operation also takes up a similar amount of memory. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 944996552 |
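
The workaround described in the comment body can be sketched roughly as follows. This is a minimal, hypothetical reconstruction: the input file pattern, the concatenation dimension `ocean_time`, and the output filename are assumptions, not details given in the issue; only the per-file open_dataset calls, the separate u/ubar lists, and the xr.concat step come from the comment.

```python
# Minimal sketch of the workaround in the comment above.
# Assumptions (not from the issue): the file pattern "output_*.nc",
# the concatenation dimension "ocean_time", and the output filename.
import glob

import xarray as xr

paths = sorted(glob.glob("output_*.nc"))  # hypothetical input files

u_parts, ubar_parts = [], []
for path in paths:
    # Open each file individually (instead of open_mfdataset) and
    # collect the two variables in separate lists.
    with xr.open_dataset(path) as ds:
        u_parts.append(ds["u"].load())
        ubar_parts.append(ds["ubar"].load())

# Concatenate each variable along the (assumed) record dimension,
# recombine into a single Dataset, and write it to disk.
u = xr.concat(u_parts, dim="ocean_time")
ubar = xr.concat(ubar_parts, dim="ocean_time")
xr.Dataset({"u": u, "ubar": ubar}).to_netcdf("combined.nc")
```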