issue_comments


2 rows where issue = 98587746 and user = 2448579 sorted by updated_at descending

id: 1144845488
html_url: https://github.com/pydata/xarray/issues/508#issuecomment-1144845488
issue_url: https://api.github.com/repos/pydata/xarray/issues/508
node_id: IC_kwDOAMm_X85EPPSw
user: dcherian (2448579)
created_at: 2022-06-02T13:10:04Z
updated_at: 2022-06-02T13:10:04Z
author_association: MEMBER

Yes that is correct

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Ignore missing variables when concatenating datasets? (98587746)
id: 553940815
html_url: https://github.com/pydata/xarray/issues/508#issuecomment-553940815
issue_url: https://api.github.com/repos/pydata/xarray/issues/508
node_id: MDEyOklzc3VlQ29tbWVudDU1Mzk0MDgxNQ==
user: dcherian (2448579)
created_at: 2019-11-14T15:33:52Z
updated_at: 2019-11-14T15:33:52Z
author_association: MEMBER

Thanks for tackling this very important issue @scottcha!

``` python
from .dataarray import DataArray
new_array = DataArray(coords=ds.coords, dims=ds.dims)
ds[k] = new_array
```

Instead of creating a DataArray we only need to create a Variable (https://xarray.pydata.org/en/stable/internals.html#variable-objects).

I would instead try full_like(example_variable, fill_value=np.nan) (import full_like from the appropriate file). The trick would be figuring out what example_variable is. Maybe like this? (there may be some clever way to avoid the two loops)

``` python
variables = []

# First loop: find an example variable to base the fill value on.
for ds in datasets:
    if k in ds.variables:
        filled = full_like(ds.variables[k], fill_value=np.nan)
        break

# Second loop: substitute the filled variable wherever k is absent.
for ds in datasets:
    if k not in ds.variables:
        variables.append(filled)
    else:
        variables.append(ds.variables[k])

vars = ensure_common_dims(variables)
```
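The two-pass pattern above can be sketched with plain NumPy, standing in for xarray objects (this is an illustrative analogue, not xarray's implementation: the dicts play the role of Datasets, `np.full_like` plays the role of xarray's `full_like`, and `k` is the variable name being concatenated):

```python
import numpy as np

# Toy stand-ins for Datasets: dicts mapping variable name -> array.
datasets = [
    {"temp": np.array([1.0, 2.0, 3.0])},
    {},                                   # this "dataset" is missing "temp"
    {"temp": np.array([4.0, 5.0, 6.0])},
]
k = "temp"

# Pass 1: find an example variable to copy the shape and dtype from.
filled = None
for ds in datasets:
    if k in ds:
        filled = np.full_like(ds[k], fill_value=np.nan)
        break

# Pass 2: substitute the NaN-filled array wherever the variable is absent.
variables = [ds[k] if k in ds else filled for ds in datasets]

result = np.concatenate(variables)
print(result)  # -> [ 1.  2.  3. nan nan nan  4.  5.  6.]
```

The middle segment comes out all-NaN, which is exactly the behavior requested in the issue: missing variables are filled rather than raising an error.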

Please send in a PR with any progress you make. We are happy to help out. We have some documentation on contributing and testing here: https://xarray.pydata.org/en/stable/contributing.html

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Ignore missing variables when concatenating datasets? (98587746)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
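The filtered view this page shows ("rows where issue = 98587746 and user = 2448579 sorted by updated_at descending") corresponds to a simple query against this schema. A minimal sketch using Python's built-in sqlite3 module, with a trimmed-down copy of the table and only the columns needed to reproduce the filter (the inserted rows are the two comments above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE issue_comments (
   id INTEGER PRIMARY KEY,
   user INTEGER,
   updated_at TEXT,
   issue INTEGER
)
""")
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, issue) VALUES (?, ?, ?, ?)",
    [
        (1144845488, 2448579, "2022-06-02T13:10:04Z", 98587746),
        (553940815, 2448579, "2019-11-14T15:33:52Z", 98587746),
    ],
)

# The query behind the page: filter by issue and user, newest first.
# ISO-8601 timestamps sort correctly as text.
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments "
    "WHERE issue = ? AND user = ? ORDER BY updated_at DESC",
    (98587746, 2448579),
).fetchall()
print(rows)  # newest comment (1144845488) first
```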
Powered by Datasette · Queries took 9274.044ms · About: xarray-datasette