issue_comments
3 rows where author_association = "MEMBER" and issue = 501150299 sorted by updated_at descending

shoyer (MEMBER) · 2019-10-13T20:24:29Z · comment 541455875 · https://github.com/pydata/xarray/pull/3364#issuecomment-541455875

Okay, sounds good then!

dcherian (MEMBER) · 2019-10-13T13:39:39Z · comment 541419389 · https://github.com/pydata/xarray/pull/3364#issuecomment-541419389

This is just for variables that are being merged and not concatenated.

```python
# determine which variables to merge, and then merge them according to compat
variables_to_merge = (coord_names | data_names) - concat_over - dim_names
```

We are still raising an error if variables in concat_over are missing in some datasets.

```python
for k in datasets[0].variables:
    if k in concat_over:
        try:
            vars = ensure_common_dims([ds.variables[k] for ds in datasets])
        except KeyError:
            raise ValueError("%r is not present in all datasets." % k)
```
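
For illustration, a minimal sketch of the distinction dcherian describes; the toy datasets and the `data_vars="minimal"` setting are assumptions for the example, not taken from the PR:

```python
import xarray as xr

# "a" contains the concat dimension "x", so it is concatenated;
# "scalar" does not, so it falls into variables_to_merge instead.
ds1 = xr.Dataset({"a": ("x", [1, 2]), "scalar": 0})
ds2 = xr.Dataset({"a": ("x", [3, 4]), "scalar": 0})

# With data_vars="minimal", only variables that already have "x"
# are concatenated; everything else is merged according to compat.
combined = xr.concat([ds1, ds2], dim="x", data_vars="minimal")
print(combined)  # "a" has length 4 along x; "scalar" stays a scalar
```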

shoyer (MEMBER) · 2019-10-13T04:54:47Z · comment 541386349 · https://github.com/pydata/xarray/pull/3364#issuecomment-541386349

I am very sympathetic to the idea of not requiring all matching variables, but do we want to handle it like this (not adding the dimension) or by adding in NaNs?
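
As a sketch of the two options being weighed here (toy datasets and assumed post-PR behavior; the NaN fill is done by hand just to illustrate the alternative):

```python
import numpy as np
import xarray as xr

ds1 = xr.Dataset({"a": ("x", [1, 2]), "meta": 0})
ds2 = xr.Dataset({"a": ("x", [3, 4])})  # "meta" is absent here

# Option taken in this PR: merge "meta" without adding the concat
# dimension to it, instead of raising because ds2 lacks it.
forgiving = xr.concat([ds1, ds2], dim="x", data_vars="minimal")

# Alternative raised above: give "meta" the "x" dimension and fill
# the entries coming from ds2 with NaN, here by inserting a NaN
# placeholder before concatenating with data_vars="all".
filled = xr.concat([ds1, ds2.assign(meta=np.nan)], dim="x", data_vars="all")
```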


Table schema:
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
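
Assuming a local copy of this database (the `github.db` filename is a placeholder), the query behind this page can be reproduced with Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical path to the SQLite file
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE author_association = ? AND issue = ?
    ORDER BY updated_at DESC
    """,
    ("MEMBER", 501150299),
).fetchall()
for comment_id, user, created, updated, assoc, body in rows:
    print(comment_id, user, updated, assoc)
```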