issue_comments


2 rows where author_association = "CONTRIBUTOR" and issue = 98587746 sorted by updated_at descending

Comment 553993804 · scottcha (user 775186) · CONTRIBUTOR
created_at 2019-11-14T17:30:09Z · updated_at 2019-11-14T17:30:09Z
https://github.com/pydata/xarray/issues/508#issuecomment-553993804

Ok got it, I'll take a look and spin up a PR.
Thanks

reactions: none
issue: Ignore missing variables when concatenating datasets? (98587746)
Comment 553709888 · scottcha (user 775186) · CONTRIBUTOR
created_at 2019-11-14T03:37:59Z · updated_at 2019-11-14T04:09:02Z
https://github.com/pydata/xarray/issues/508#issuecomment-553709888

I just ran into this issue. While the previous fix seems to handle one case, it doesn't handle all of them. Before I clean this up and open a new PR, does this look like it's on the right track? (It worked for my case, where I was concatenating multiple datasets that always had the same dims and coordinates but were sometimes missing variables.)

Starts at line 353 in concat.py:

for k in datasets[0].variables:
    if k in concat_over:
        try:
            # new code
            for ds in datasets:
                if k not in ds.variables:
                    # make a new array with the same dimensions and coordinates;
                    # by default this will be initialized to np.nan, which is what we want
                    from .dataarray import DataArray
                    new_array = DataArray(coords=ds.coords, dims=ds.dims)
                    ds[k] = new_array
            # end new code
            vars = ensure_common_dims([ds.variables[k] for ds in datasets])
        except KeyError:
            # this can likely be removed then
            raise ValueError("%r is not present in all datasets." % k)
        combined = concat_vars(vars, dim, positions)
        assert isinstance(combined, Variable)
        result_vars[k] = combined

reactions: none
issue: Ignore missing variables when concatenating datasets? (98587746)
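The idea in the patch above — before concatenating, pad each dataset with an all-NaN array for any variable it lacks, so every dataset ends up with the same set of variables — can be sketched outside xarray's internals with plain dicts and NumPy. The helper name and dict layout here are illustrative assumptions, not xarray's API:

```python
import numpy as np

def pad_missing_variables(datasets, concat_length):
    """Give each dataset a NaN-filled array for every variable it lacks.

    `datasets` is a list of dicts mapping variable name -> 1-D array;
    `concat_length(ds)` returns that dataset's size along the concat dimension.
    """
    # Union of variable names across all datasets.
    all_vars = set()
    for ds in datasets:
        all_vars.update(ds)
    # Fill in missing variables with NaN arrays of the right length.
    for ds in datasets:
        for name in all_vars - set(ds):
            ds[name] = np.full(concat_length(ds), np.nan)
    return datasets

ds1 = {"a": np.array([1.0, 2.0]), "b": np.array([3.0, 4.0])}
ds2 = {"a": np.array([5.0, 6.0])}  # "b" is missing here

pad_missing_variables([ds1, ds2], concat_length=lambda ds: len(ds["a"]))
combined = {name: np.concatenate([ds1[name], ds2[name]]) for name in ds1}
# combined["b"] now carries NaN for the positions that came from ds2
```

This mirrors the patch's behavior: positions contributed by a dataset that never had the variable come back as NaN rather than raising an error.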

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 147.009ms · About: xarray-datasette