html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/4527#issuecomment-714633238,https://api.github.com/repos/pydata/xarray/issues/4527,714633238,MDEyOklzc3VlQ29tbWVudDcxNDYzMzIzOA==,2448579,2020-10-22T17:06:46Z,2020-10-22T17:06:46Z,MEMBER,"> Perhaps split_chunks()?
There was a proposal for `.blocks` (https://github.com/pydata/xarray/issues/3147#issuecomment-513044413). I agree that a 'chunk id' would be useful. Does dask expose that somehow?
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,726020233
https://github.com/pydata/xarray/issues/4527#issuecomment-714625909,https://api.github.com/repos/pydata/xarray/issues/4527,714625909,MDEyOklzc3VlQ29tbWVudDcxNDYyNTkwOQ==,1217238,2020-10-22T16:53:45Z,2020-10-22T16:53:45Z,MEMBER,"I agree, this does sound useful!
It might make sense to split this into a few pieces of functionality:
1. A new helper function that splits an xarray object into separate objects for each chunk, including some representation of the ""chunk id"". Perhaps `split_chunks()`?
2. A new higher level function that combines (1) and the existing `save_mfdataset` to automatically save an xarray object into multiple files. This probably should be a new function rather than using the existing `save_mfdataset` because the API is different.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,726020233