issue_comments: 1315919098
This data as json
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/5878#issuecomment-1315919098 | https://api.github.com/repos/pydata/xarray/issues/5878 | 1315919098 | IC_kwDOAMm_X85Ob1T6 | 48723181 | 2022-11-15T22:04:20Z | 2022-11-15T22:04:20Z | NONE | This is my latest attempt to avoid the cache issue. It is not working, but I wanted to document it here for the next time this comes up.

1. Run the following in a local jupyter notebook:

```python
import fsspec
import xarray as xr
import json
import gcsfs

# define a mapper to the ldeo-glaciology bucket
# needs a token
with open('/Users/jkingslake/Documents/misc/ldeo-glaciology-bc97b12df06b.json') as token_file:
    token = json.load(token_file)

filename = 'gs://ldeo-glaciology/append_test/test56'
mapper = fsspec.get_mapper(filename, mode='w', token=token)

# define two simple datasets
ds0 = xr.Dataset({'temperature': (['time'], [50, 51, 52])}, coords={'time': [1, 2, 3]})
ds1 = xr.Dataset({'temperature': (['time'], [53, 54, 55])}, coords={'time': [4, 5, 6]})

# write the first ds to the bucket
ds0.to_zarr(mapper)
```

2. Run the following in a local terminal.

3. Run the following in the local notebook:

```python
# append the second ds to the same zarr store
ds1.to_zarr(mapper, mode='a', append_dim='time')

ds = xr.open_dataset('gs://ldeo-glaciology/append_test/test56', engine='zarr', consolidated=False)
len(ds.time)
```

This returns 3. At least, it sometimes does this; sometimes it works later, and sometimes it works immediately. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1030811490 |