issue_comments: 410357191
| field | value |
|---|---|
| html_url | https://github.com/pydata/xarray/issues/2159#issuecomment-410357191 |
| issue_url | https://api.github.com/repos/pydata/xarray/issues/2159 |
| id | 410357191 |
| node_id | MDEyOklzc3VlQ29tbWVudDQxMDM1NzE5MQ== |
| user | 35968931 |
| created_at | 2018-08-03T19:44:32Z |
| updated_at | 2018-08-03T19:44:32Z |
| author_association | MEMBER |
| reactions | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| performed_via_github_app | |
| issue | 324350248 |

body:

Thanks @jnhansen! I actually ended up writing my own, much lower-level version of this using the netCDF library. I did that because I was finding it hard to work out how to merge multiple datasets and then write the data out to a new netCDF file in chunks; I kept accidentally loading the entire merged dataset into memory at once. This might just be because I wasn't using the dask integration properly, though.

Have you tried using your function to merge netCDF files, then write out a single file which is larger than RAM? Is that even possible in xarray?
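
For context, the larger-than-RAM workflow asked about in the comment is possible in xarray when the input files are opened lazily through dask. A minimal sketch, assuming the inputs match a hypothetical `piece_*.nc` glob and share a `time` dimension (the glob pattern and chunk size are stand-ins; `combine="by_coords"` requires a reasonably recent xarray):

```python
import xarray as xr

# Open all pieces lazily as dask-backed arrays and combine them by their
# coordinate values; nothing is loaded into memory at this point.
# "piece_*.nc" and the "time" chunk size are hypothetical stand-ins.
ds = xr.open_mfdataset("piece_*.nc", combine="by_coords", chunks={"time": 100})

# With dask-backed variables, to_netcdf computes and writes the data
# chunk by chunk, so the merged output can exceed available RAM.
ds.to_netcdf("merged.nc")
```

The key point is that `open_mfdataset` keeps every variable as a lazy dask array, so the merge is purely symbolic until `to_netcdf` streams the chunks to disk; this is what avoids accidentally materialising the entire merged dataset in memory.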