issue_comments


6 rows where issue = 593825520 sorted by updated_at descending


user 2

  • apatlpo 4
  • fujiisoup 2

author_association 2

  • CONTRIBUTOR 4
  • MEMBER 2

issue 1

  • Element wise dataArray generation · 6
id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
610167347 https://github.com/pydata/xarray/issues/3932#issuecomment-610167347 https://api.github.com/repos/pydata/xarray/issues/3932 MDEyOklzc3VlQ29tbWVudDYxMDE2NzM0Nw== apatlpo 11750960 2020-04-07T04:32:12Z 2020-04-07T04:32:12Z CONTRIBUTOR

I'll close this for now, as there don't seem to be any other ideas about this.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Element wise dataArray generation 593825520
609605285 https://github.com/pydata/xarray/issues/3932#issuecomment-609605285 https://api.github.com/repos/pydata/xarray/issues/3932 MDEyOklzc3VlQ29tbWVudDYwOTYwNTI4NQ== apatlpo 11750960 2020-04-06T07:08:19Z 2020-04-06T07:08:19Z CONTRIBUTOR

This sounds like method 1 (with dask delayed) to me. There may be no faster option; thanks for giving it a thought, @fujiisoup.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Element wise dataArray generation 593825520
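
The "method 1 (with dask delayed)" mentioned in the comment above is not shown anywhere in this excerpt, so the following is only a guess at what such an element-wise approach could look like; `some_exp` and `Nstats` are borrowed from the later comments, and everything else is assumed.

```python
# Hypothetical sketch of an element-wise dask.delayed loop ("method 1" is not
# shown in this thread, so this is an assumption, not the author's code).
import dask
import numpy as np
import xarray as xr

x = np.arange(10)
y = np.arange(20)
Nstats = 5
some_exp = lambda xi, yi: np.ones((Nstats,))  # placeholder experiment

# one delayed task per (x, y) element
delayed = [[dask.delayed(some_exp)(xi, yi) for yi in y] for xi in x]
(computed,) = dask.compute(delayed)           # nested lists of (Nstats,) arrays

data = np.array(computed)                     # shape (len(x), len(y), Nstats)
out = xr.DataArray(data, dims=("x", "y", "stats"), coords={"x": x, "y": y})
```
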
609483638 https://github.com/pydata/xarray/issues/3932#issuecomment-609483638 https://api.github.com/repos/pydata/xarray/issues/3932 MDEyOklzc3VlQ29tbWVudDYwOTQ4MzYzOA== fujiisoup 6815844 2020-04-05T21:09:58Z 2020-04-05T21:09:58Z MEMBER

> An inspection of the dask dashboard indicates that the computation is not distributed among the workers, though. How could I make sure this happens?

Ah, I have no idea... Are you able to distribute the function `some_exp` without wrapping it with xarray?

Within my limited knowledge, it may be better to prepare another function that distributes `some_exp` over the workers and pass that function into `apply_ufunc`, but I am not 100% sure. Probably there is a better way...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Element wise dataArray generation 593825520
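
A minimal sketch of one possible reading of the suggestion above: run `some_exp` on the workers with `dask.distributed` directly and only wrap the gathered results in xarray at the end. The thread does not show a concrete implementation, so the cluster setup and all variable names here are assumptions.

```python
# Hypothetical sketch (not from the thread): distribute some_exp over dask
# workers directly, then assemble the gathered results into a DataArray.
import itertools
import numpy as np
import xarray as xr
from dask.distributed import Client

if __name__ == "__main__":
    client = Client()                 # connect to, or start, a local cluster

    x = np.arange(10)
    y = np.arange(20)
    Nstats = 5
    some_exp = lambda xi, yi: np.ones((Nstats,))  # placeholder experiment

    pairs = list(itertools.product(x, y))
    futures = client.map(some_exp,
                         [p[0] for p in pairs],
                         [p[1] for p in pairs])   # one task per (x, y) pair
    results = client.gather(futures)              # list of (Nstats,) arrays

    data = np.array(results).reshape(len(x), len(y), Nstats)
    out = xr.DataArray(data, dims=("x", "y", "stats"), coords={"x": x, "y": y})
    client.close()
```
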
609407162 https://github.com/pydata/xarray/issues/3932#issuecomment-609407162 https://api.github.com/repos/pydata/xarray/issues/3932 MDEyOklzc3VlQ29tbWVudDYwOTQwNzE2Mg== apatlpo 11750960 2020-04-05T12:17:15Z 2020-04-05T12:17:47Z CONTRIBUTOR

Thanks a lot @fujiisoup, your suggestion does help get rid of the need to build the `ds['_y']` variable. Here is the updated `apply_ufunc` solution:

```python
x = np.arange(10100)
y = np.arange(20100)

ds = xr.Dataset(coords={'x': x, 'y': y})

ds = ds.chunk({'x': 1, 'y': 1})  # does not change anything

# let's say each experiment outputs 5 statistical diagnostics
Nstats = 5
some_exp = lambda x, y: np.ones((Nstats,))

out = xr.apply_ufunc(some_exp, ds.x, ds.y, dask='parallelized', vectorize=True,
                     output_dtypes=[float], output_sizes={'stats': Nstats},
                     output_core_dims=[['stats']])
```

An inspection of the dask dashboard indicates that the computation is not distributed among the workers, though. How could I make sure this happens?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Element wise dataArray generation 593825520
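
A possible explanation for the dashboard staying empty, offered here as an editorial note rather than something stated in the thread: `ds.x` and `ds.y` are dimension (index) coordinates, and `Dataset.chunk` leaves those as plain NumPy/pandas data, so `apply_ufunc(..., dask='parallelized')` receives no dask arrays and simply computes eagerly on the client. Below is a sketch of the same call with chunked, non-index inputs, using the `dask_gufunc_kwargs` spelling of recent xarray versions (≥ 0.16.1); the versions contemporary with this thread took `output_sizes` directly, as in the comments above.

```python
# Hypothetical follow-up (not from the thread): give apply_ufunc real dask
# inputs by building chunked, non-index DataArrays instead of ds.x / ds.y.
import numpy as np
import xarray as xr

Nstats = 5
some_exp = lambda x, y: np.ones((Nstats,))   # placeholder experiment

xda = xr.DataArray(np.arange(10), dims="x").chunk({"x": 1})
yda = xr.DataArray(np.arange(20), dims="y").chunk({"y": 1})

out = xr.apply_ufunc(some_exp, xda, yda,
                     dask="parallelized", vectorize=True,
                     output_dtypes=[float],
                     output_core_dims=[["stats"]],
                     dask_gufunc_kwargs={"output_sizes": {"stats": Nstats}})
result = out.compute()   # with a distributed Client running, these tasks
                         # should now appear on the dask dashboard
```
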
609407192 https://github.com/pydata/xarray/issues/3932#issuecomment-609407192 https://api.github.com/repos/pydata/xarray/issues/3932 MDEyOklzc3VlQ29tbWVudDYwOTQwNzE5Mg== apatlpo 11750960 2020-04-05T12:17:26Z 2020-04-05T12:17:26Z CONTRIBUTOR

Sorry, closed by accident.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Element wise dataArray generation 593825520
609094164 https://github.com/pydata/xarray/issues/3932#issuecomment-609094164 https://api.github.com/repos/pydata/xarray/issues/3932 MDEyOklzc3VlQ29tbWVudDYwOTA5NDE2NA== fujiisoup 6815844 2020-04-04T21:54:41Z 2020-04-04T21:54:56Z MEMBER

Is

```python
xr.apply_ufunc(some_exp, ds.x, ds.y, dask='parallelized', output_dtypes=[float],
               output_sizes={'stats': Nstats}, output_core_dims=[['stats']],
               vectorize=True)
```

what you want? This gives

```python
<xarray.DataArray (x: 10, y: 20, stats: 5)>
array([[[1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.],
        ...
        [1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.]]])
Coordinates:
  * x        (x) int64 0 1 2 3 4 5 6 7 8 9
  * y        (y) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
Dimensions without coordinates: stats
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Element wise dataArray generation 593825520

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
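
The listing above ("6 rows where issue = 593825520 sorted by updated_at descending") corresponds to a simple query against this schema. A minimal sketch using Python's sqlite3, assuming the table lives in a local SQLite file named github.db (the filename is an assumption):

```python
# Hypothetical sketch: reproduce the row listing above with Python's sqlite3.
# "github.db" is an assumed filename for the SQLite database behind this page.
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ?
    ORDER BY updated_at DESC
    """,
    (593825520,),
).fetchall()

for comment_id, user, created, updated, assoc, body in rows:
    print(comment_id, updated, assoc)

conn.close()
```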