issue_comments


2 rows where issue = 902031342 and user = 14808389 sorted by updated_at descending

id: 848900625
html_url: https://github.com/pydata/xarray/issues/5377#issuecomment-848900625
issue_url: https://api.github.com/repos/pydata/xarray/issues/5377
node_id: MDEyOklzc3VlQ29tbWVudDg0ODkwMDYyNQ==
user: keewis (14808389)
created_at: 2021-05-26T16:04:04Z
updated_at: 2021-05-26T16:04:04Z
author_association: MEMBER
body:

note that xarray.tutorial.open_rasterio will probably need something similar.

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

issue: xr.tutorial.open_dataset should work even with locally preloaded cache. (902031342)
id: 848777509
html_url: https://github.com/pydata/xarray/issues/5377#issuecomment-848777509
issue_url: https://api.github.com/repos/pydata/xarray/issues/5377
node_id: MDEyOklzc3VlQ29tbWVudDg0ODc3NzUwOQ==
user: keewis (14808389)
created_at: 2021-05-26T13:38:20Z
updated_at: 2021-05-26T13:38:20Z
author_association: MEMBER
body:

note that the "unique" part of the file name is the md5sum of the url, so this is not something random:

```python
import hashlib
from pathlib import Path

import pooch

url = "https://github.com/mapbox/rasterio/raw/1.2.1/tests/data/RGB.byte.tif"
path = Path(url)  # only used for its final component, "RGB.byte.tif"
filepath = pooch.os_cache("xarray_tutorial_data") / "-".join([
    hashlib.md5(url.encode()).hexdigest(),
    path.name,
])
```

That said, this is because we use pooch.retrieve instead of pooch.create(...).fetch (which would save as the normal name, possibly in a separate directory if versioning was enabled), and we should probably add (date-based) versions to xarray-data so we can use the registry. Not sure, though.


In the meantime you could pass fname (which is supposed to be a plain file name, not a path) even if the file does not already exist, to make the patch a bit simpler, or recreate the "unique" filename from the hashed url.
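
Recreating that "unique" name by hand needs only the standard library; a minimal sketch (the helper name is made up):

```python
import hashlib
from pathlib import Path


def cached_file_name(url: str) -> str:
    """Recreate the md5-prefixed name described above: the md5 hex
    digest of the url, joined to the original file name with a dash."""
    digest = hashlib.md5(url.encode()).hexdigest()
    return f"{digest}-{Path(url).name}"


url = "https://github.com/mapbox/rasterio/raw/1.2.1/tests/data/RGB.byte.tif"
name = cached_file_name(url)
# name looks like "<32 hex chars>-RGB.byte.tif"
```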

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

issue: xr.tutorial.open_dataset should work even with locally preloaded cache. (902031342)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
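
The page's underlying query (rows where issue = 902031342 and user = 14808389, sorted by updated_at descending) can be reproduced against this schema with Python's built-in sqlite3; the inserted rows below carry only the columns the query touches:

```python
import sqlite3

schema = """
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
"""

con = sqlite3.connect(":memory:")
con.executescript(schema)
con.executemany(
    "INSERT INTO issue_comments (id, user, issue, updated_at)"
    " VALUES (?, ?, ?, ?)",
    [
        (848900625, 14808389, 902031342, "2021-05-26T16:04:04Z"),
        (848777509, 14808389, 902031342, "2021-05-26T13:38:20Z"),
    ],
)
rows = con.execute(
    "SELECT id FROM issue_comments"
    " WHERE issue = 902031342 AND user = 14808389"
    " ORDER BY updated_at DESC"
).fetchall()
# ISO 8601 timestamps sort lexically, so the newer comment comes first:
# rows == [(848900625,), (848777509,)]
```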
Powered by Datasette · Queries took 3277.346ms · About: xarray-datasette