issue_comments

4 rows where issue = 593825520 and user = 11750960 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
610167347 https://github.com/pydata/xarray/issues/3932#issuecomment-610167347 https://api.github.com/repos/pydata/xarray/issues/3932 MDEyOklzc3VlQ29tbWVudDYxMDE2NzM0Nw== apatlpo 11750960 2020-04-07T04:32:12Z 2020-04-07T04:32:12Z CONTRIBUTOR

I'll close this for now, as there don't seem to be any other ideas about this.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Element wise dataArray generation 593825520
609605285 https://github.com/pydata/xarray/issues/3932#issuecomment-609605285 https://api.github.com/repos/pydata/xarray/issues/3932 MDEyOklzc3VlQ29tbWVudDYwOTYwNTI4NQ== apatlpo 11750960 2020-04-06T07:08:19Z 2020-04-06T07:08:19Z CONTRIBUTOR

This sounds like method 1 (with dask delayed) to me. There may be no faster option; thanks for giving it a thought, @fujiisoup.
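"Method 1" here refers to building one deferred task per (x, y) grid point and evaluating them all in a single batch. A minimal standard-library stand-in for that dask.delayed pattern, with `ThreadPoolExecutor` substituted for a dask scheduler (the grid sizes and the `some_exp` stub are illustrative, not taken from the issue):

```python
from concurrent.futures import ThreadPoolExecutor
import itertools

Nstats = 5

def some_exp(x, y):
    # stand-in experiment: returns Nstats statistical diagnostics
    return [1.0] * Nstats

x = range(4)
y = range(3)

# build one deferred call per (x, y) pair, then evaluate them as a batch
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(some_exp, xi, yi)
               for xi, yi in itertools.product(x, y)]
    results = [f.result() for f in futures]

print(len(results))     # one result per grid point: 4 * 3 = 12
print(len(results[0]))  # each result holds Nstats diagnostics: 5
```

With dask, the same structure would use `dask.delayed(some_exp)(xi, yi)` per pair and a single `dask.compute(*tasks)` at the end.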

  Element wise dataArray generation 593825520
609407162 https://github.com/pydata/xarray/issues/3932#issuecomment-609407162 https://api.github.com/repos/pydata/xarray/issues/3932 MDEyOklzc3VlQ29tbWVudDYwOTQwNzE2Mg== apatlpo 11750960 2020-04-05T12:17:15Z 2020-04-05T12:17:47Z CONTRIBUTOR

thanks a lot @fujiisoup, your suggestion does help getting rid of the need to build the ds['_y'] variable. Here is the updated apply_ufunc solution:

```
import numpy as np
import xarray as xr

x = np.arange(10100)
y = np.arange(20100)

ds = xr.Dataset(coords={'x': x, 'y': y})

ds = ds.chunk({'x': 1, 'y': 1})  # does not change anything

# let's say each experiment outputs 5 statistical diagnostics
Nstats = 5
some_exp = lambda x, y: np.ones((Nstats,))

out = xr.apply_ufunc(some_exp, ds.x, ds.y,
                     dask='parallelized',
                     vectorize=True,
                     output_dtypes=[float],
                     output_sizes={'stats': Nstats},
                     output_core_dims=[['stats']])
```

An inspection of the dask dashboard indicates that the computation is not distributed among workers, though. How could I make sure this happens?

  Element wise dataArray generation 593825520
609407192 https://github.com/pydata/xarray/issues/3932#issuecomment-609407192 https://api.github.com/repos/pydata/xarray/issues/3932 MDEyOklzc3VlQ29tbWVudDYwOTQwNzE5Mg== apatlpo 11750960 2020-04-05T12:17:26Z 2020-04-05T12:17:26Z CONTRIBUTOR

Sorry, closed by accident.

  Element wise dataArray generation 593825520

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
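The schema above can be exercised directly with Python's built-in sqlite3 module. A small self-contained sketch that recreates the table (REFERENCES clauses dropped so no `users`/`issues` tables are needed) and runs the same filter as this page, using ids and timestamps from two of the rows shown above with abbreviated bodies:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE issue_comments (
   html_url TEXT,
   issue_url TEXT,
   id INTEGER PRIMARY KEY,
   node_id TEXT,
   user INTEGER,
   created_at TEXT,
   updated_at TEXT,
   author_association TEXT,
   body TEXT,
   reactions TEXT,
   performed_via_github_app TEXT,
   issue INTEGER
)""")
con.execute("CREATE INDEX idx_issue_comments_issue ON issue_comments (issue)")
con.execute("CREATE INDEX idx_issue_comments_user ON issue_comments (user)")

# two of the rows shown on this page (bodies abbreviated)
rows = [
    (610167347, 11750960, "2020-04-07T04:32:12Z", "I'll close this for now", 593825520),
    (609605285, 11750960, "2020-04-06T07:08:19Z", "This sounds like method 1", 593825520),
]
con.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, body, issue)"
    " VALUES (?, ?, ?, ?, ?)",
    rows,
)

# the query behind "rows where issue = 593825520 and user = 11750960
# sorted by updated_at descending"
cur = con.execute(
    "SELECT id, updated_at FROM issue_comments"
    " WHERE issue = ? AND user = ? ORDER BY updated_at DESC",
    (593825520, 11750960),
)
results = cur.fetchall()
print(results)  # newest comment first
```

Because `updated_at` is stored as ISO 8601 text, lexicographic `ORDER BY` matches chronological order, which is what the page's descending sort relies on.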