issue_comments: 969021506

Comment by user 1197350 (MEMBER) on pydata/xarray#5878, posted 2021-11-15T15:25:37Z (updated 2021-11-15T15:25:46Z):
https://github.com/pydata/xarray/issues/5878#issuecomment-969021506

So there are two layers here where caching could be happening:

- gcsfs / fsspec (python)
- gcs itself
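
One way to rule out the python layer is to disable gcsfs's directory-listing cache so every call goes straight to GCS. A minimal sketch, assuming gcsfs is installed and the bucket allows anonymous reads; `token="anon"` and `cache_timeout=0` are assumptions about this setup, not something stated in the thread:

```python
# Sketch: bypass the gcsfs/fsspec listings cache so every call hits GCS.
import gcsfs

# cache_timeout=0 asks gcsfs not to reuse cached directory listings;
# token="anon" assumes the bucket allows anonymous (public) reads.
gcs = gcsfs.GCSFileSystem(token="anon", cache_timeout=0)

# Clear any listings this instance may already have cached.
gcs.invalidate_cache()

# Fetch the array metadata directly, with zarr/xarray out of the loop.
print(gcs.cat("ldeo-glaciology/append_test/test5/temperature/.zarray").decode())
```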

I propose we eliminate the python layer entirely for the moment. Whenever you load the dataset, its shape is completely determined by whatever zarr sees in gs://ldeo-glaciology/append_test/test5/temperature/.zarray. So try looking at this file directly. You can figure out its public URL and just fetch it with curl, e.g.

```
$ curl https://storage.googleapis.com/ldeo-glaciology/append_test/test5/temperature/.zarray
{
    "chunks": [ 3 ],
    "compressor": {
        "blocksize": 0,
        "clevel": 5,
        "cname": "lz4",
        "id": "blosc",
        "shuffle": 1
    },
    "dtype": "<i8",
    "fill_value": null,
    "filters": null,
    "order": "C",
    "shape": [ 6 ],
    "zarr_format": 2
}
```
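
The same check can be done from Python without gcsfs at all; a minimal sketch using only the standard library (the URL is the public one from the curl example above):

```python
# Sketch: fetch the .zarray metadata over plain HTTPS, no gcsfs/zarr.
import json
import urllib.request

url = ("https://storage.googleapis.com/"
       "ldeo-glaciology/append_test/test5/temperature/.zarray")

with urllib.request.urlopen(url) as resp:
    meta = json.load(resp)

# zarr derives the array's shape from this "shape" field, so whatever
# prints here is exactly what a fresh open of the store should show.
print("shape:", meta["shape"], "chunks:", meta["chunks"])
```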

Run this from the command line on the JupyterHub. Then try `gcs.cat('ldeo-glaciology/append_test/test5/temperature/.zarray')` and see if you get the same thing. Basically, just eliminate as many layers as possible from the problem until you get to the core issue.
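
To make that comparison concrete, here is a minimal sketch that fetches the metadata both ways and prints the two shapes side by side; `token="anon"` is an assumption about how the bucket is being accessed:

```python
# Sketch: compare gcsfs's view of .zarray against the raw HTTPS response.
import json
import urllib.request

import gcsfs

path = "ldeo-glaciology/append_test/test5/temperature/.zarray"
url = "https://storage.googleapis.com/" + path

with urllib.request.urlopen(url) as resp:
    via_http = json.load(resp)

gcs = gcsfs.GCSFileSystem(token="anon")  # assumes public read access
via_gcsfs = json.loads(gcs.cat(path))    # gcs.cat returns bytes

# If these disagree, the stale metadata lives in the gcsfs/fsspec layer;
# if they agree, look at GCS itself (or at how the append was written).
print("http  shape:", via_http["shape"])
print("gcsfs shape:", via_gcsfs["shape"])
```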
