Comment on pydata/xarray PR #5390 · 2021-05-28 · CONTRIBUTOR
https://github.com/pydata/xarray/pull/5390#issuecomment-850276619

@willirath, thanks for your example notebook! I'm still trying to get my head around this a bit, though.

Say you have da_a and da_b defined as:

```python3
import numpy as np
import pandas as pd
import xarray as xr

da_a = xr.DataArray(
    np.array([[1, 2, 3, 4], [1, 0.1, 0.2, 0.3], [2, 3.2, 0.6, 1.8]]),
    dims=("space", "time"),
    coords=[
        ("space", ["IA", "IL", "IN"]),
        ("time", pd.date_range("2000-01-01", freq="1D", periods=4)),
    ],
).chunk()

da_b = xr.DataArray(
    np.array([[0.2, 0.4, 0.6, 2], [15, 10, 5, 1], [1, 3.2, np.nan, 1.8]]),
    dims=("space", "time"),
    coords=[
        ("space", ["IA", "IL", "IN"]),
        ("time", pd.date_range("2000-01-01", freq="1D", periods=4)),
    ],
).chunk()
```

The original computation in _cov_corr has a graph something like:

Whereas my alteration now has a graph more like this:

Am I correct in thinking that this is a 'better' computational graph, because the original chunks are not passed on to later points in the computation?
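
To make the comparison concrete, here is a minimal sketch of how one might inspect the graphs (assuming the `da_a`/`da_b` defined above; `dim="time"` and the output filename are just illustrative, and `xr.corr` is used since it goes through `_cov_corr`):

```python3
# Minimal sketch, assuming the da_a/da_b defined above are available.
# xr.corr on chunked inputs builds a lazy dask-backed result.
result = xr.corr(da_a, da_b, dim="time")  # dim="time" is just an example choice

# Rough measure of graph complexity: total number of tasks in the dask graph.
print(len(dict(result.__dask_graph__())))

# Render the task graph for visual inspection (requires graphviz installed).
result.data.visualize(filename="corr_graph.svg")  # illustrative output filename
```

Counting tasks and rendering the graph before and after the change should show whether the original chunks still feed into later stages of the computation.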
