issue_comments: 521221473
html_url | issue_url | id | user | created_at | updated_at | author_association | issue |
---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/3213#issuecomment-521221473 | https://api.github.com/repos/pydata/xarray/issues/3213 | 521221473 | 6213168 | 2019-08-14T12:15:39Z | 2019-08-14T12:20:59Z | MEMBER | 479942077 |

+1 for the introduction of to_sparse() / to_dense(), but let's please avoid the mistakes that were made with chunk(). DataArray.chunk() is extremely frustrating when you have non-index coords: 9 times out of 10 you only want to chunk the data, yet you have to go through a horrid manual rebuild along the lines of the sketch below.
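A minimal sketch of the kind of rebuild being described (assuming a dask-backed workflow; the array and coord names are illustrative, not taken from the original comment):

```python
import dask.array as da
import numpy as np
import xarray

a = xarray.DataArray(
    np.zeros((4, 4)),
    dims=("x", "y"),
    coords={"x": range(4), "label": ("x", list("abcd"))},  # "label" is a non-index coord
)

# The horrid workaround: chunk only the data by rebuilding the DataArray
# by hand, so that the non-index coords stay as plain numpy arrays
# instead of also being wrapped in dask.
a = xarray.DataArray(
    da.from_array(a.data, chunks=2),
    dims=a.dims,
    coords=a.coords,
    attrs=a.attrs,
    name=a.name,
)
```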
Possibly we could define them as

```python
class DataArray:
    def to_sparse(
        self,
        data: bool = True,
        coords: Union[Iterable[Hashable], bool] = False
    )

class Dataset:
    def to_sparse(
        self,
        data_vars: Union[Iterable[Hashable], bool] = True,
        coords: Union[Iterable[Hashable], bool] = False
    )
```

The same goes for to_dense() and chunk() (the latter would require a DeprecationWarning for a few releases before switching the default for coords from True to False, to be triggered only in the presence of dask-backed coords).
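For illustration, calls under the proposed signatures might look like this (purely hypothetical: none of these parameters exist in xarray, and ds / arr stand for any Dataset / DataArray):

```python
# Hypothetical usage of the proposed API; nothing here exists in xarray today.
ds_sparse = ds.to_sparse()                    # convert the data_vars only (default)
ds_all = ds.to_sparse(coords=True)            # convert data_vars and every coord
arr_sel = arr.to_sparse(data=False, coords=["lat", "lon"])  # only the listed coords
```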
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
479942077 |