issue_comments: 347983448
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/pull/1528#issuecomment-347983448 | https://api.github.com/repos/pydata/xarray/issues/1528 | 347983448 | MDEyOklzc3VlQ29tbWVudDM0Nzk4MzQ0OA== | 1197350 | 2017-11-29T20:18:08Z | 2017-11-29T20:18:08Z | MEMBER | Right now I am in a dilemma over how to move forward. Fixing this string encoding issue will require some serious hacks to CF encoding. If I do this before #1087 is finished, it will be a waste of time (and a pain). On the other hand, #1087 could take a long time, since it is a major refactor itself. Is there some way to punt on the multi-length string encoding for now? We could just error if such variables are present. This would allow us to get the experimental zarr backend out into the wild. FWIW, none of the datasets I want to use this with actually have any string data variables at all. I believe 95% of netCDF datasets are just regular numbers. This is an edge case. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 253136694 |
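
The comment's proposal, erroring out when string variables are present rather than implementing multi-length string encoding, could look like the minimal sketch below. The `check_no_string_variables` helper is hypothetical, not part of xarray's actual zarr backend; it only illustrates the "just error if such variables are present" idea by rejecting datasets with string- or object-typed variables.

```python
import numpy as np
import xarray as xr


def check_no_string_variables(dataset: xr.Dataset) -> None:
    """Raise if the dataset contains string-typed variables.

    Hypothetical guard illustrating the comment's proposal: instead of
    implementing multi-length string encoding, an experimental backend
    could refuse such datasets for now.
    """
    for name, var in dataset.variables.items():
        # "U" = unicode strings, "S" = bytes, "O" = object (often strings)
        if var.dtype.kind in ("U", "S", "O"):
            raise NotImplementedError(
                f"variable {name!r} has dtype {var.dtype}; "
                "string encoding is not yet supported"
            )


# Usage: a purely numeric dataset passes; a string variable raises.
ds = xr.Dataset({"temperature": ("x", np.arange(3.0))})
check_no_string_variables(ds)  # OK

ds_str = ds.assign(label=("x", np.array(["a", "b", "c"])))
try:
    check_no_string_variables(ds_str)
except NotImplementedError as err:
    print(err)
```

This matches the comment's cost-benefit reasoning: since the datasets the author cares about contain only numeric variables, a loud, early failure on the string edge case unblocks shipping the experimental zarr backend without waiting on #1087.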