
issue_comments


4 rows where issue = 287223508 and user = 8881170 sorted by updated_at descending


id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
628135082 https://github.com/pydata/xarray/issues/1815#issuecomment-628135082 https://api.github.com/repos/pydata/xarray/issues/1815 MDEyOklzc3VlQ29tbWVudDYyODEzNTA4Mg== bradyrx 8881170 2020-05-13T17:27:06Z 2020-05-13T17:27:06Z CONTRIBUTOR

> So would you be re-doing the same computation by running .compute() separately on these objects?

> Yes, but you can do dask.compute(xarray_obj1, xarray_obj2,...) or combine those objects appropriately into a Dataset and then call compute on that.

Good call. I figured there was a workaround.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  apply_ufunc(dask='parallelized') with multiple outputs 287223508
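The dask.compute tip quoted above can be sketched with plain dask arrays; the array size, chunking, and variable names here are illustrative, not from the thread:

```python
import dask
import dask.array as da

# Two results that share the same upstream computation
x = da.random.random((1000,), chunks=100)
total = x.sum()
mean = x.mean()

# Computing them together lets dask merge the shared graph,
# so the chunks of x are only materialized once:
total_val, mean_val = dask.compute(total, mean)
```

Calling `total.compute()` and `mean.compute()` separately would instead walk the shared portion of the graph twice, which is exactly the re-computation concern raised in the comment below.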
628070696 https://github.com/pydata/xarray/issues/1815#issuecomment-628070696 https://api.github.com/repos/pydata/xarray/issues/1815 MDEyOklzc3VlQ29tbWVudDYyODA3MDY5Ng== bradyrx 8881170 2020-05-13T15:33:56Z 2020-05-13T15:33:56Z CONTRIBUTOR

One issue I see is that this would return multiple dask objects, correct? So to get the results from them, you'd have to run .compute() on each separately. I think it's a valid assumption to expect that the multiple output objects would share a lot of the same computational pipeline. So would you be re-doing the same computation by running .compute() separately on these objects?

The earlier mentioned code snippets provide a nice path forward, since you can just run compute on one object, and then split its result (or however you name it) dimension into multiple individual objects. Thoughts?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  apply_ufunc(dask='parallelized') with multiple outputs 287223508
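The "split its result dimension" idea in the comment above can be sketched as follows, assuming a stacked DataArray of the shape apply_ufunc returns with `output_core_dims=[["parameter"]]`; the data and coordinate labels are invented for illustration:

```python
import numpy as np
import xarray as xr

# Hypothetical stacked result: one DataArray with an extra
# "parameter" dimension holding all outputs
result = xr.DataArray(
    np.arange(10.0).reshape(2, 5),
    dims=("x", "parameter"),
)

# Label the parameter axis, compute once (if backed by dask),
# then split it into individually named objects
result = result.assign_coords(
    parameter=["slope", "intercept", "r_value", "p_value", "std_err"]
)
slope = result.sel(parameter="slope")
intercept = result.sel(parameter="intercept")
```

Because the split happens after a single compute, the shared pipeline is only evaluated once.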
614244205 https://github.com/pydata/xarray/issues/1815#issuecomment-614244205 https://api.github.com/repos/pydata/xarray/issues/1815 MDEyOklzc3VlQ29tbWVudDYxNDI0NDIwNQ== bradyrx 8881170 2020-04-15T19:45:50Z 2020-04-15T19:45:50Z CONTRIBUTOR

I think ideally it would be nice to return multiple DataArrays or a Dataset of variables. But I'm really happy with this solution. I'm using it on a 600GB dataset of particle trajectories and was able to write a ufunc to go through and return each particle's x, y, z location when it met a certain condition.

I think having something simple like the stackoverflow snippet I posted would be great for the docs as an apply_ufunc example. I'd be happy to lead this if folks think it's a good idea.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  apply_ufunc(dask='parallelized') with multiple outputs 287223508
614216243 https://github.com/pydata/xarray/issues/1815#issuecomment-614216243 https://api.github.com/repos/pydata/xarray/issues/1815 MDEyOklzc3VlQ29tbWVudDYxNDIxNjI0Mw== bradyrx 8881170 2020-04-15T18:49:51Z 2020-04-15T18:49:51Z CONTRIBUTOR

This looks essentially the same as @stefraynaud's answer, but I came across this stackoverflow response here: https://stackoverflow.com/questions/52094320/with-xarray-how-to-parallelize-1d-operations-on-a-multidimensional-dataset.

@andersy005, I imagine you're far past this now. And this might have been related to discussions with Genevieve and me anyway.

```python
def new_linregress(x, y):
    # Wrapper around scipy linregress to use in apply_ufunc
    slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
    return np.array([slope, intercept, r_value, p_value, std_err])

# return a new DataArray
stats = xr.apply_ufunc(
    new_linregress,
    ds[x],
    ds[y],
    input_core_dims=[['year'], ['year']],
    output_core_dims=[["parameter"]],
    vectorize=True,
    dask="parallelized",
    output_dtypes=['float64'],
    output_sizes={"parameter": 5},
)
```

{
    "total_count": 3,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 3,
    "rocket": 0,
    "eyes": 0
}
  apply_ufunc(dask='parallelized') with multiple outputs 287223508
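For reference, here is a self-contained eager variant of the snippet above (no dask), with synthetic data invented so it runs end to end; the dataset contents are not from the thread:

```python
import numpy as np
import xarray as xr
from scipy import stats

def new_linregress(x, y):
    # Wrapper around scipy linregress to use in apply_ufunc
    slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
    return np.array([slope, intercept, r_value, p_value, std_err])

# Synthetic data: y = 2*x + 1 along a "year" dimension
ds = xr.Dataset(
    {
        "x": ("year", np.arange(20.0)),
        "y": ("year", 2.0 * np.arange(20.0) + 1.0),
    }
)

result = xr.apply_ufunc(
    new_linregress,
    ds["x"],
    ds["y"],
    input_core_dims=[["year"], ["year"]],
    output_core_dims=[["parameter"]],
    vectorize=True,
)
# result has a length-5 "parameter" dimension:
# [slope, intercept, r_value, p_value, std_err]
```

Note that on newer xarray versions, `output_sizes` for `dask="parallelized"` is expected inside `dask_gufunc_kwargs={"output_sizes": {"parameter": 5}}` rather than as a top-level argument.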


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette