issue_comments: 408860643
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/2329#issuecomment-408860643 | https://api.github.com/repos/pydata/xarray/issues/2329 | 408860643 | MDEyOklzc3VlQ29tbWVudDQwODg2MDY0Mw== | 1197350 | 2018-07-30T13:20:59Z | 2018-07-30T13:20:59Z | MEMBER | @lrntct - this sounds like a reasonable way to use zarr. We routinely do this sort of transcoding and it works reasonably well. Unfortunately something clearly isn't working right in your case. These things can be hard to debug, but we will try to help you. You might want to start by reviewing the guide I wrote for Pangeo on preparing zarr datasets. It would also be good to see a bit more detail. You posted a function [...]. If instead you have just one big netCDF file as in the example you posted above, I think I see your problem: you are calling [...]. More ideas:
- explicitly specify the chunks (rather than using [...])

Another useful piece of advice would be to use the dask distributed dashboard to monitor what is happening under the hood. You can do this by running [...].
Hopefully these ideas can help you move forward. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
345715825 |
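The "explicitly specify the chunks" suggestion above can be sketched as follows. This is a minimal illustration, not the commenter's actual code: the variable name, array sizes, and chunk sizes are made up, and an in-memory dataset stands in for `xr.open_dataset("file.nc", chunks=...)` on a real netCDF file.

```python
# Sketch of explicit chunking before a zarr write. All names and sizes
# here are hypothetical; choose chunks to match how the data will be read.
import numpy as np
import xarray as xr

# Stand-in for opening one big netCDF file; with a real file you would
# pass chunks= to xr.open_dataset instead of calling .chunk() afterwards.
ds = xr.Dataset({"temp": (("time", "x"), np.random.rand(365, 1000))})

# Explicit dask chunks rather than relying on an automatic choice.
ds = ds.chunk({"time": 100, "x": 500})

print(ds["temp"].data.chunksize)  # largest chunk per dimension: (100, 500)
```

Writing the chunked dataset out would then be something like `ds.to_zarr("out.zarr", mode="w")`, assuming the zarr package is installed; zarr chunk shapes follow the dask chunks chosen here.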
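The dask distributed dashboard mentioned in the comment comes up automatically once a `dask.distributed.Client` is created. A minimal sketch, assuming `dask[distributed]` is installed (the `processes=False` flag keeps everything in one process for illustration; for a real workload a plain `Client()` is typical):

```python
from dask.distributed import Client

# Start a small local cluster in this process (threads only). Creating a
# Client also starts the scheduler's dashboard, if bokeh is installed.
client = Client(processes=False)
status = client.status            # "running" once connected
link = client.dashboard_link      # URL of the dashboard, typically
print(status, link)               # something like http://127.0.0.1:8787/status
client.close()
```

Opening the printed URL in a browser shows the task stream and memory use while the zarr write runs, which helps spot whether workers are spilling to disk or idling.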