issue_comments: 843971807
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/4156#issuecomment-843971807 | https://api.github.com/repos/pydata/xarray/issues/4156 | 843971807 | MDEyOklzc3VlQ29tbWVudDg0Mzk3MTgwNw== | 5637662 | 2021-05-19T10:33:08Z | 2021-05-19T10:33:08Z | CONTRIBUTOR | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 638947370 |

body:

I have hacked together something that supports reading and writing sparse arrays to a netCDF file, but I didn't know how or where to fit it into xarray.

```
def ds_to_netcdf(ds, fn):
    dsorg = ds
    ds = dsorg.copy()
    for v in ds:
        # sparse-backed variables expose nnz plus to_coo()/linear_loc()
        if hasattr(ds[v].data, "nnz") and (
            hasattr(ds[v].data, "to_coo") or hasattr(ds[v].data, "linear_loc")
        ):
            coord = f"{v}_xarray_index"
            assert coord not in ds
            data = ds[v].data
            if hasattr(data, "to_coo"):
                data = data.to_coo()
            # store the flattened (linear) indices as a companion coordinate
            ds[coord] = coord, data.linear_loc()
            dims = ds[v].dims
            # CF-style "compress" attribute records the original dimensions
            ds[coord].attrs["compress"] = " ".join(dims)
            at = ds[v].attrs
            # keep only the stored values; remember the fill value as an attribute
            ds[v] = coord, data.data
            ds[v].attrs = at
            ds[v].attrs["fill_value"] = str(data.fill_value)
            # preserve the original dimension lengths so the shape can be restored
            for d in dims:
                if d not in ds:
                    ds[f"_len{d}"] = len(dsorg[d])
```
```
def xr_open_dataset(fn):
    ds = xr.open_dataset(fn)
```
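
The reader above is only started in this excerpt; a rough sketch of how the decoding could continue, assuming the `sparse` package and the conventions set up by `ds_to_netcdf` (the `compress` attribute, the string `fill_value` attribute, and the `_len<dim>` length variables). `xr_open_sparse_dataset` is a made-up name, not an existing xarray function.

```
import numpy as np
import sparse
import xarray as xr


def xr_open_sparse_dataset(fn):
    # hypothetical reader matching ds_to_netcdf above; not an xarray API
    ds = xr.open_dataset(fn)
    for coord in [v for v in ds.variables if "compress" in ds[v].attrs]:
        var = coord.replace("_xarray_index", "")  # naming used by ds_to_netcdf
        dims = ds[coord].attrs["compress"].split()
        shape = tuple(int(ds[f"_len{d}"]) for d in dims)
        vals = ds[var]
        # undo the linear indexing and rebuild the sparse array
        idx = np.stack(np.unravel_index(ds[coord].values, shape))
        fill = np.array(vals.attrs.get("fill_value", "0"), dtype=vals.dtype)
        data = sparse.COO(idx, vals.values, shape=shape, fill_value=fill)
        ds[var] = xr.DataArray(data, dims=dims, attrs=vals.attrs)
        ds = ds.drop_vars(coord)
    return ds
```

A round trip would then look something like `ds_to_netcdf(sparse_ds, "test.nc")` followed by `xr_open_sparse_dataset("test.nc")`.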
Has there been any progress since last year?
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
638947370 |