issue_comments: 516043946
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---
https://github.com/pydata/xarray/issues/3096#issuecomment-516043946 | https://api.github.com/repos/pydata/xarray/issues/3096 | 516043946 | MDEyOklzc3VlQ29tbWVudDUxNjA0Mzk0Ng== | 18643609 | 2019-07-29T15:37:27Z | 2019-07-29T15:38:31Z | NONE | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 466994138

body:

Coming back to this issue (I still haven't had time to try the open_mfdataset approach): I have another use case where I would like to store different variables indexed by the same dimension, but not all available at the same moment. For example, say variables V1 and V2 are both indexed on dimension D1. V1 becomes available at time T, and I would like to write it to my S3 bucket right away, but V2 only becomes available at time T+1. At that point I would like to append the values of V2, leaving the missing V2 values between T and T+1 filled with the fill_value specified in the metadata.

What actually happens is that the append itself succeeds, but opening the resulting zarr store then requires passing V2 to the drop_variables argument of open_zarr; otherwise you get the error shown in my original post. However, open_zarr is also called internally when appending (cf. the error trace in the original post), and there you cannot pass this argument, so every subsequent append attempt fails. This prevents you from ever appending the values of V2; the dataset is now frozen.

Am I misusing the functionality, or do you know a workaround that stays within xarray, rather than coding everything myself (for optimization reasons)?
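
To make the reported sequence concrete, here is a minimal sketch of the steps the comment describes. It assumes a local path in place of the S3 bucket and made-up data values; the variable names V1/V2 and the dimension D1 come from the comment, and whether each step succeeds or fails reflects the behaviour reported there, which may depend on the xarray/zarr versions in use.

```python
import numpy as np
import xarray as xr

store = "example.zarr"  # hypothetical local path standing in for the S3 bucket

# Time T: only V1 is available, so write it alone.
xr.Dataset({"V1": ("D1", np.arange(3.0))}).to_zarr(store, mode="w")

# Time T+1: V2 arrives; append both variables along D1. Per the comment,
# this append succeeds, but V2 now covers only the appended slice of D1.
ds_t1 = xr.Dataset({
    "V1": ("D1", np.arange(3.0, 6.0)),
    "V2": ("D1", np.arange(3.0)),
})
ds_t1.to_zarr(store, mode="a", append_dim="D1")

# Reading the store now only works if V2 is dropped; without drop_variables,
# open_zarr raises the error shown in the original post.
ds = xr.open_zarr(store, drop_variables=["V2"])

# Time T+2: any further append fails, because to_zarr calls open_zarr
# internally and there is no way to pass drop_variables to that call:
#
# ds_t2 = xr.Dataset({"V1": ("D1", np.arange(6.0, 9.0)),
#                     "V2": ("D1", np.arange(3.0, 6.0))})
# ds_t2.to_zarr(store, mode="a", append_dim="D1")  # raises; store is frozen
```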