issue_comments

2 rows where issue = 576337745 and user = 6042212 sorted by updated_at descending

id: 605222008
html_url: https://github.com/pydata/xarray/issues/3831#issuecomment-605222008
issue_url: https://api.github.com/repos/pydata/xarray/issues/3831
node_id: MDEyOklzc3VlQ29tbWVudDYwNTIyMjAwOA==
user: martindurant (6042212)
created_at: 2020-03-27T19:11:59Z
updated_at: 2020-03-27T19:11:59Z
author_association: CONTRIBUTOR
body:

Note that s3fs and gcsfs now expose the kwargs skip_instance_cache, use_listings_cache, listings_expiry_time, and max_paths, and pass them to fsspec. See https://filesystem-spec.readthedocs.io/en/latest/features.html#instance-caching and https://filesystem-spec.readthedocs.io/en/latest/features.html#listings-caching

(Although the new releases of both already include the change that accessing a file's contents or metadata does not require a directory listing, which is the right thing for zarr, where the full paths are known.)
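As a minimal sketch of how those kwargs can be passed when constructing a filesystem (the values shown are illustrative assumptions, not recommendations; s3fs forwards them to fsspec, and gcsfs.GCSFileSystem accepts the same kwargs):

import s3fs

# Illustrative values only; s3fs passes these kwargs through to fsspec.
fs = s3fs.S3FileSystem(
    skip_instance_cache=True,   # don't reuse a previously cached filesystem instance
    use_listings_cache=True,    # keep the directory-listings cache...
    listings_expiry_time=10,    # ...but expire cached listings after 10 seconds
    max_paths=100,              # and cap the cache at 100 directory listings
)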

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Errors using to_zarr for an s3 store (576337745)

id: 595379998
html_url: https://github.com/pydata/xarray/issues/3831#issuecomment-595379998
issue_url: https://api.github.com/repos/pydata/xarray/issues/3831
node_id: MDEyOklzc3VlQ29tbWVudDU5NTM3OTk5OA==
user: martindurant (6042212)
created_at: 2020-03-05T18:32:38Z
updated_at: 2020-03-05T18:32:38Z
author_association: CONTRIBUTOR
body:

https://github.com/intake/filesystem_spec/pull/243 is where my attempt to fix this kind of thing will live.

However, as it currently stands, writing or deleting keys should invalidate the appropriate part of the cache, so I don't know why the problem has arisen. If it is a cache problem, then s3.invalidate_cache() can always be called.
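A minimal sketch of that workaround (the bucket path is hypothetical, for illustration only):

import s3fs

fs = s3fs.S3FileSystem()
fs.invalidate_cache("my-bucket/data.zarr")  # drop cached listings under this hypothetical path
fs.invalidate_cache()                       # or, with no argument, clear the entire listings cache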

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Errors using to_zarr for an s3 store (576337745)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
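For reference, a minimal sketch that reproduces this page's query (2 rows where issue = 576337745 and user = 6042212, sorted by updated_at descending) against a local copy of the database; the filename github.db is an assumption:

import sqlite3

conn = sqlite3.connect("github.db")  # assumed local copy of this Datasette database
rows = conn.execute(
    """
    select id, updated_at, body
    from issue_comments
    where issue = ? and [user] = ?   -- bracket-quoted to match the schema above
    order by updated_at desc
    """,
    (576337745, 6042212),
).fetchall()
for comment_id, updated_at, body in rows:
    print(comment_id, updated_at, body[:60])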