html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/3831#issuecomment-605222008,https://api.github.com/repos/pydata/xarray/issues/3831,605222008,MDEyOklzc3VlQ29tbWVudDYwNTIyMjAwOA==,6042212,2020-03-27T19:11:59Z,2020-03-27T19:11:59Z,CONTRIBUTOR,"Note that s3fs and gcsfs now expose the kwargs `skip_instance_cache`, `use_listings_cache`, `listings_expiry_time`, and `max_paths`, and pass them on to `fsspec`. See https://filesystem-spec.readthedocs.io/en/latest/features.html#instance-caching and https://filesystem-spec.readthedocs.io/en/latest/features.html#listings-caching. (The new releases of both already include the change that accessing a file, whether its contents or metadata, does *not* require a directory listing, which is the right behaviour for zarr, where the full paths are known.)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,576337745
https://github.com/pydata/xarray/issues/3831#issuecomment-595379998,https://api.github.com/repos/pydata/xarray/issues/3831,595379998,MDEyOklzc3VlQ29tbWVudDU5NTM3OTk5OA==,6042212,2020-03-05T18:32:38Z,2020-03-05T18:32:38Z,CONTRIBUTOR,"https://github.com/intake/filesystem_spec/pull/243 is where my attempt to fix this kind of thing will live. However, as it currently stands, writing or deleting keys should already invalidate the appropriate part of the cache, so I don't know why the problem has arisen. If it is a cache problem, `s3.invalidate_cache()` can always be called.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,576337745
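A minimal sketch of how the cache-control kwargs named in the first comment might be used when opening a zarr store from S3; the bucket path and parameter values are hypothetical, not taken from the issue:

```python
import s3fs
import xarray as xr

# skip_instance_cache: do not reuse a previously created filesystem
# instance with the same parameters (avoids inheriting its stale
# listings cache). The other three kwargs tune fsspec's listings cache.
fs = s3fs.S3FileSystem(
    anon=True,
    skip_instance_cache=True,
    use_listings_cache=True,
    listings_expiry_time=60,  # seconds before a cached listing goes stale
    max_paths=100,            # max number of directory listings to cache
)

# For zarr the full key paths are known, so reads do not need listings;
# open the store through a mapper as usual (path is hypothetical).
store = s3fs.S3Map(root="my-bucket/my-dataset.zarr", s3=fs)
ds = xr.open_zarr(store)
```

And a sketch of the manual escape hatch mentioned in the second comment, assuming stale listings are suspected after keys were written or deleted out-of-band (paths again hypothetical):

```python
import s3fs

fs = s3fs.S3FileSystem()

fs.ls("my-bucket/my-dataset.zarr")  # populates the listings cache

# ... keys are written or deleted by another process ...

fs.invalidate_cache("my-bucket/my-dataset.zarr")  # drop one cached listing
fs.invalidate_cache()                             # or clear the whole cache
```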