issue_comments: 1494027739
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/7409#issuecomment-1494027739 | https://api.github.com/repos/pydata/xarray/issues/7409 | 1494027739 | IC_kwDOAMm_X85ZDQ3b | 85181086 | 2023-04-03T09:58:10Z | 2023-04-03T09:58:10Z | NONE | @gcaria, your solution looks like a reasonable approach for converting a 1D or 2D chunked DataArray to a dask Series or DataFrame, respectively. Note that it only works if the DataArray is chunked along one or both dimensions; if the DataArray is not chunked, calling to_dask() will return an equivalent in-memory pandas DataFrame or Series. One potential improvement is to check that the chunking is actually suitable for conversion to a dask DataFrame or Series: if the chunk sizes are too small, the overhead of parallelism may outweigh its benefits. Here's an updated version of your code that includes this check:
```
import dask.dataframe as dkd
import xarray as xr
from typing import Union

MIN_CHUNK_SIZE = 100_000

def to_dask(da: xr.DataArray) -> Union[dkd.Series, dkd.DataFrame]:
    # Raise if any chunk is too small for the parallelism to pay off.
    if da.chunks is not None and any(
        size < MIN_CHUNK_SIZE for sizes in da.chunks for size in sizes
    ):
        raise ValueError(f"chunk sizes smaller than {MIN_CHUNK_SIZE} detected")
    # A 1-D dask array becomes a dask Series, a 2-D one a dask DataFrame.
    return dkd.from_dask_array(da.data, columns=da.name if da.ndim == 1 else None)
```
This code adds a check to ensure that the chunk sizes are not too small (here the minimum chunk size is set to 100,000). If any chunk is smaller than the minimum, the function raises a ValueError. You can adjust the minimum chunk size as needed for your specific use case. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1517575123 |