issues: 726020233
field | value
---|---
id | 726020233
node_id | MDU6SXNzdWU3MjYwMjAyMzM=
number | 4527
title | Refactor `xr.save_mfdataset()` to automatically save an xarray object backed by dask arrays to multiple files
user | 13301940
state | open
locked | 0
comments | 2
created_at | 2020-10-20T23:48:21Z
updated_at | 2020-10-22T17:06:46Z
author_association | MEMBER
reactions | { "url": "https://api.github.com/repos/pydata/xarray/issues/4527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
repo | 13221727
type | issue

body:

**Is your feature request related to a problem? Please describe.**

Currently, when a user wants to write multiple netCDF files in parallel with xarray and dask, they can take full advantage of `xr.save_mfdataset()`. A few months ago, I wrote a blog post showing how to save an xarray dataset backed by dask into multiple netCDF files, and since then I've been meaning to request a new feature that makes this process convenient for users.

**Describe the solution you'd like**

Would it be useful to refactor the existing `xr.save_mfdataset()` so that it saves a dask-backed dataset to multiple files automatically?

```python
ds.save_mfdataset(prefix="directory/my-dataset")

# or

xr.save_mfdataset(ds, prefix="directory/my-dataset")
```

---->

```bash
directory/my-dataset-chunk-1.nc
directory/my-dataset-chunk-2.nc
directory/my-dataset-chunk-3.nc
....
```
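As a rough illustration, here is how the proposed behavior can be emulated today with the existing API, along the lines of the blog post mentioned above: split the dask-backed dataset along its chunk boundaries and hand the pieces to `xr.save_mfdataset()`. This is a minimal sketch, not the issue author's implementation; `save_by_chunks` is a hypothetical helper name, and the example assumes dask and a netCDF backend (e.g. netCDF4) are installed.

```python
import numpy as np
import xarray as xr


def save_by_chunks(ds, prefix, dim="time"):
    """Hypothetical helper: write one netCDF file per dask chunk along ``dim``."""
    # Dask chunk sizes along `dim`, e.g. (4, 4, 2) for 10 elements chunked by 4.
    chunk_sizes = ds.chunks[dim]
    # Turn chunk sizes into index boundaries: (4, 4, 2) -> [0, 4, 8, 10].
    bounds = np.cumsum((0,) + tuple(chunk_sizes))
    # One lazily-selected sub-dataset per chunk.
    datasets = [
        ds.isel({dim: slice(int(start), int(stop))})
        for start, stop in zip(bounds[:-1], bounds[1:])
    ]
    paths = [f"{prefix}-chunk-{i + 1}.nc" for i in range(len(datasets))]
    # Delegate the actual parallel write to the existing function.
    xr.save_mfdataset(datasets, paths)


# Example: a small dask-backed dataset written to three files,
# my-dataset-chunk-1.nc, my-dataset-chunk-2.nc, my-dataset-chunk-3.nc.
ds = xr.Dataset(
    {"air": ("time", np.random.rand(10))},
    coords={"time": np.arange(10)},
).chunk({"time": 4})
save_by_chunks(ds, "my-dataset")
```

The refactor requested in this issue would effectively fold this splitting-and-naming logic into `xr.save_mfdataset()` itself, so that a single dask-backed dataset plus a `prefix` would be enough.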