issue_comments: 414389492
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
https://github.com/pydata/xarray/pull/2261#issuecomment-414389492 | https://api.github.com/repos/pydata/xarray/issues/2261 | 414389492 | MDEyOklzc3VlQ29tbWVudDQxNDM4OTQ5Mg== | 1217238 | 2018-08-20T17:01:38Z | 2018-08-20T17:01:38Z | MEMBER | This is ready for further review and testing. Things are working for writes with dask-distributed, including with h5netcdf (requires the 0.6.2 release of h5netcdf) and on Windows (https://github.com/pydata/xarray/issues/1738). Follow-ups for future work: - I managed to work around the need for a reentrant lock (https://github.com/dask/dask/issues/3832), but switching to a reentrant lock would be a nice clean-up. - Currently I'm using the "close after each write" strategy with dask-distributed (https://github.com/dask/distributed/issues/2163). This works OK for netCDF4 and h5netcdf, but for the SciPy netCDF writer it's basically a non-starter, because SciPy only writes complete files (https://github.com/scipy/scipy/issues/9157) -- so I'm still having SciPy raise an error. It would be nice to also support the "write complete files" strategy, which could have significantly better performance at the cost of memory usage. We might need some new API for this. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 337267315
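The two ideas in the comment's follow-up list -- a reentrant lock, and the "close after each write" strategy -- can be illustrated with a minimal sketch. This is not xarray's actual implementation; `write_chunk`, the lock, and the file layout are hypothetical stand-ins. The point is that each write opens the target, writes its piece, and closes again, so no open file handle has to survive between distributed tasks, and a `threading.RLock` (reentrant) stays safe if the write path is ever re-entered from the same thread.

```python
import os
import threading

# Reentrant lock: the same thread may acquire it again without deadlocking,
# which is the clean-up suggested for dask/dask#3832. (A plain Lock would
# hang on nested acquisition.)
_lock = threading.RLock()

def write_chunk(path: str, offset: int, data: bytes) -> None:
    """Hypothetical "close after each write" task: open, seek, write, close.

    Because no handle outlives the call, the task is trivially serializable
    and can run on any dask-distributed worker.
    """
    with _lock:
        # "r+b" preserves existing bytes; fall back to "wb" on first write.
        mode = "r+b" if os.path.exists(path) else "wb"
        with open(path, mode) as f:
            f.seek(offset)
            f.write(data)

# Each call is independent, so the writes could be scheduled as separate tasks.
write_chunk("out.bin", 0, b"head")
write_chunk("out.bin", 4, b"tail")
```

This also makes concrete why the SciPy netCDF writer cannot use this strategy: it only emits complete files, so there is no valid on-disk state to reopen and extend between chunk writes.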