issue_comments: 400906996
html_url: https://github.com/pydata/xarray/issues/2256#issuecomment-400906996
issue_url: https://api.github.com/repos/pydata/xarray/issues/2256
id: 400906996
node_id: MDEyOklzc3VlQ29tbWVudDQwMDkwNjk5Ng==
user: 1197350
created_at: 2018-06-28T04:27:38Z
updated_at: 2018-06-28T04:27:38Z
author_association: MEMBER
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: 336458472

body:

> Thanks for the extra info! I am still confused about what you are trying to achieve. What do you mean by "cache"? Is your goal to compress the data so that it uses less space on disk? Or is it to provide a more "analysis ready" format? In other words, why do you feel you need to transform this data to zarr? Why not just work directly with the netcdf files?
>
> Sorry to keep asking questions rather than providing any answers! Just trying to understand your goals...