issue_comments: 1063977656
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/6329#issuecomment-1063977656 | https://api.github.com/repos/pydata/xarray/issues/6329 | 1063977656 | IC_kwDOAMm_X84_awK4 | 6574622 | 2022-03-10T11:56:44Z | 2022-03-10T11:56:44Z | CONTRIBUTOR | Yes, this is kind of the behaviour I'd expect. And great that it helped clarify things. Still, building up the metadata nicely upfront (which is required for region writes) is quite convoluted... That's what I meant with
in the previous comment. I think establishing and documenting good practices for this would help, but probably we also want better tools. In any case, that would probably be yet another issue. Note that if you care about this particular example (e.g. appending in a single thread in increasing order of timesteps), then it should also be possible to do this much more simply using append:

```python
import os

import numpy as np
import xarray as xr

filename = 'processed_dataset.zarr'
ds = xr.tutorial.open_dataset('air_temperature')
ds.air.encoding['dtype'] = np.dtype('float32')
X, Y = 250, 250  # size of each final timestep
for i in range(len(ds.time)):
    # some kind of heavy processing
    arr_r = some_processing(ds.isel(time=slice(i, i + 1)), X, Y)
    del arr_r.air.attrs["_FillValue"]
    if os.path.exists(filename):
        arr_r.to_zarr(filename, append_dim='time')
    else:
        arr_r.to_zarr(filename)
```

If you find out more about the cloud case, please post a note; otherwise, we can assume that the original bug report is fine? |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1159923690 |