issue_comments

3 rows where author_association = "NONE" and user = 74916839 sorted by updated_at descending

id: 1164686511 · node_id: IC_kwDOAMm_X85Fa7Sv
user: robin-cls (74916839) · author_association: NONE
created_at: 2022-06-23T17:33:48Z · updated_at: 2022-06-23T17:33:48Z
html_url: https://github.com/pydata/xarray/issues/6709#issuecomment-1164686511
issue_url: https://api.github.com/repos/pydata/xarray/issues/6709

Thanks @gjoseph92 and @dcherian. I'll try the different approaches in the links you provided to see if I can improve my current solution (I currently compute the fields separately, which means more I/O and more operations).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Means of zarr arrays cause a memory overload in dask workers (1277437106)
id: 1164064469 · node_id: IC_kwDOAMm_X85FYjbV
user: robin-cls (74916839) · author_association: NONE
created_at: 2022-06-23T07:40:33Z · updated_at: 2022-06-23T07:43:20Z
html_url: https://github.com/pydata/xarray/issues/6709#issuecomment-1164064469
issue_url: https://api.github.com/repos/pydata/xarray/issues/6709

Thanks for the tips. I was investigating `inline_array=True` and still had no luck. The graph seems OK, though; I can attach it if you want, but I think zarr is not the culprit.
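(For reference, a minimal sketch of how `inline_array` is typically passed when opening a zarr store with xarray; the path below is a placeholder, not my actual store:)

```python
import xarray as xr

# Placeholder store path; the real store holds anom_u / anom_v chunked along time.
ds = xr.open_dataset(
    "anomalies.zarr",
    engine="zarr",
    chunks={},            # keep the on-disk zarr chunking as the dask chunking
    inline_array=True,    # inline the source array into the task graph
)
print(ds.anom_u.data.dask)  # inspect the resulting task graph
```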

Here is why I think zarr is not the culprit:

In the first case, where we build the array from scratch, the ones array is simple, and Dask seems to understand that it does not have to make many copies of it. So when replacing ones with random data, we observe the same behavior as when opening the dataset from a zarr store (high memory usage on a worker):

```python
import dask.array as da
import numpy as np
import xarray as xr

ds = xr.Dataset(
    dict(
        anom_u=(["time", "face", "j", "i"], da.random.random((10311, 1, 987, 1920), chunks=(10, 1, 987, 1920))),
        anom_v=(["time", "face", "j", "i"], da.random.random((10311, 1, 987, 1920), chunks=(10, 1, 987, 1920))),
    )
)

ds["anom_uu_mean"] = (["face", "j", "i"], np.mean(ds.anom_u.data ** 2, axis=0))
ds["anom_vv_mean"] = (["face", "j", "i"], np.mean(ds.anom_v.data ** 2, axis=0))
ds["anom_uv_mean"] = (["face", "j", "i"], np.mean(ds.anom_u.data * ds.anom_v.data, axis=0))

ds[["anom_uu_mean", "anom_vv_mean", "anom_uv_mean"]].compute()
```

I think the question now is why Dask must load so much data when doing my operation:

If we take the computation graph (I've put the non-optimized version), my understanding is that we could do the following (a rough sketch of this follows the list):

- Load the first chunk of anom_u
- Load the second chunk of anom_v
- Do the multiplications `anom_u * anom_v`, `anom_u ** 2`, `anom_v ** 2`
- Do the mean-chunk task
- Unload all the previous tasks
- Redo the same and combine the mean-chunk tasks
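(A minimal sketch of that streaming evaluation, just to make the expectation concrete; the helper name is made up, with `u` and `v` standing for the dask arrays behind anom_u and anom_v:)

```python
def streaming_second_moments(u, v):
    """Illustrative only: walk two dask arrays one time-chunk at a time."""
    n = u.shape[0]
    sum_uu = sum_vv = sum_uv = 0.0
    for i in range(u.numblocks[0]):
        bu = u.blocks[i].compute()   # load one chunk of anom_u
        bv = v.blocks[i].compute()   # load the matching chunk of anom_v
        sum_uu += (bu ** 2).sum(axis=0)
        sum_vv += (bv ** 2).sum(axis=0)
        sum_uv += (bu * bv).sum(axis=0)
        # bu and bv go out of scope here, so only one pair of chunks stays resident
    return sum_uu / n, sum_vv / n, sum_uv / n

# e.g. streaming_second_moments(ds.anom_u.data, ds.anom_v.data)
```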

For information, one chunk is about 1.4G, so I expect to see peaks of 5 * 1.4 = 7G in memory (plus what is needed to store the mean-chunk), but I instead got 15G+ in my worker, most of it taken by the random-sample tasks.

Is my understanding of the distributed mean wrong? Why are the random-sample tasks not flushed?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Means of zarr arrays cause a memory overload in dask workers (1277437106)
id: 1162833362 · node_id: IC_kwDOAMm_X85FT23S
user: robin-cls (74916839) · author_association: NONE
created_at: 2022-06-22T08:54:18Z · updated_at: 2022-06-22T09:10:04Z
html_url: https://github.com/pydata/xarray/issues/6709#issuecomment-1162833362
issue_url: https://api.github.com/repos/pydata/xarray/issues/6709

Hi @TomNicholas

I've reduced the original dataset to 11 chunks over the time dimension so that we can see the graph properly. I also replaced the .compute operation with a to_zarr(compute=False), because I don't know how to visualize xarray operations without generating a Delayed object (comments are welcome on this point!).
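(to_zarr(compute=False) returns a dask Delayed, which can then be rendered without computing anything; here is a minimal sketch with a stand-in dataset and placeholder file names:)

```python
import dask.array as da
import xarray as xr

# Stand-in for the reduced dataset; the real one has 11 time chunks of anomalies.
ds = xr.Dataset({"anom_uv_mean": (["face", "j", "i"], da.random.random((1, 987, 1920)))})

# Writing lazily yields a dask Delayed object instead of triggering the computation,
# and that Delayed can be rendered as a graph (requires graphviz).
delayed_write = ds.to_zarr("means.zarr", compute=False)  # placeholder output path
delayed_write.visualize(filename="graph.png", optimize_graph=False)
```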

Anyway, here are the files. The first one is the graph where the means are built from dask ones arrays.

The second one is the graph where the means are built from the same arrays but opened from a zarr store.

I am quite a newbie at debugging dask graphs, but everything seems OK in the second graph, apart from the open_dataset tasks that are linked to a parent task. Also, I noticed that Dask has fused the 'ones' operations in the first graph. Would it help if I generated other arrays with zeros instead?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Means of zarr arrays cause a memory overload in dask workers (1277437106)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
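A minimal sketch of reproducing the filtered view above against a local copy of this database with Python's sqlite3 module (the github.db filename is an assumption):

```python
import sqlite3

# Assumes a local copy of the Datasette database; the filename is a guess.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, updated_at, author_association, body
    FROM issue_comments
    WHERE author_association = 'NONE' AND [user] = 74916839
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, updated_at, assoc, body in rows:
    print(comment_id, updated_at, assoc, body[:60])
```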