issue_comments: 489236748
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/1849#issuecomment-489236748 | https://api.github.com/repos/pydata/xarray/issues/1849 | 489236748 | MDEyOklzc3VlQ29tbWVudDQ4OTIzNjc0OA== | 12465248 | 2019-05-03T20:54:50Z | 2019-05-03T20:54:50Z | CONTRIBUTOR | @dcherian Thanks. First, I think you're right about the cause. Second, my example shows something slightly more complicated than the original example, which was also not clear to me. In my case the variable involves the unlimited dimension. This makes sense upon a slightly more nuanced reading of the netCDF4 manual (as quoted by markelg).
The last sentence apparently means that for any variable with an unlimited dimension, contiguous storage is not allowed. I propose that the solution should be to both a) delete `encoding['contiguous']` if it is True when asked to write out a variable containing an unlimited dimension, and b) raise an informative warning that the variable was chunked because it contained an unlimited dimension. (If a user hates warnings, they can handle this deletion themselves. On the other hand, there's really nothing else to do, so I'm not sure the warning is necessary... I don't have a strong opinion on this, but the code is fiddling with the encodings under the hood, so a warning seems polite.) A final question: should `encoding['contiguous']` be removed from the xarray variable, or should it just be removed for the purposes of writing it to netCDF4 on disk? I suppose a user could be writing the xarray dataset to another format that might allow what netCDF does not. This should be an easy detail. I'll make a PR with the above and we can evaluate the concrete changes. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
290572700 |
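The proposed fix (drop `encoding['contiguous']` for variables spanning an unlimited dimension, and warn) can be sketched roughly as follows. This is a minimal illustration, not xarray's actual writer code: the function name, the plain-dict representation of variables, and the `unlimited_dims` argument are all assumptions made for the sketch.

```python
import warnings


def drop_contiguous_for_unlimited(variables, unlimited_dims):
    """Hypothetical sketch: scrub encoding['contiguous'] before a
    netCDF4 write for any variable that spans an unlimited dimension.

    ``variables`` is assumed to be a mapping of name -> dict with
    ``"dims"`` (tuple of str) and ``"encoding"`` (dict) keys; this is
    an illustrative structure, not xarray's internal representation.
    """
    for name, var in variables.items():
        if set(var.get("dims", ())) & set(unlimited_dims):
            if var.get("encoding", {}).get("contiguous") is True:
                # netCDF4 cannot store a variable with an unlimited
                # dimension contiguously, so chunked storage is forced.
                del var["encoding"]["contiguous"]
                warnings.warn(
                    f"variable {name!r} spans an unlimited dimension; "
                    "dropping encoding['contiguous'] and writing chunked",
                    UserWarning,
                )
    return variables
```

Whether the deletion should happen on the xarray variable itself or only on a copy used for the netCDF4 write (the second open question above) would determine whether this operates on the dataset's encodings in place or on a per-write copy.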