
issue_comments


6 rows where author_association = "NONE" and issue = 374025325 sorted by updated_at descending


id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
935769790 https://github.com/pydata/xarray/issues/2511#issuecomment-935769790 https://api.github.com/repos/pydata/xarray/issues/2511 IC_kwDOAMm_X843xra- pl-marasco 22492773 2021-10-06T08:47:24Z 2021-10-06T08:47:24Z NONE

@bzah I've been testing your code and I can confirm the timing increase once `.compute()` isn't used. I've noticed that with your modification the dask array seems to be computed more than once per sample. I made some tests using a modified version of #3237, and here are my observations:

Assuming we have only one sample object after the resample, the expected result should be 1 compute, and that's what we get if we call the computation before `.argmax()`. If `.compute()` is removed, I get 3 computations in total. As confirmation: if you increase the number of samples, you get a multiple of 3 computes.

I still don't know the reason, or whether this is correct, but it seems weird to me; it could explain the time increase, though.

@dcherian @shoyer do you know if all this makes any sense? Should `.isel()` automatically trigger the computation, or should it give back a lazy array?

Here is the code I've been using (it only works after adding the modification proposed by @bzah):

```
import numpy as np
import pandas as pd  # needed for pd.date_range below
import dask
import xarray as xr


class Scheduler:
    """From: https://stackoverflow.com/questions/53289286/"""

    def __init__(self, max_computes=20):
        self.max_computes = max_computes
        self.total_computes = 0

    def __call__(self, dsk, keys, **kwargs):
        self.total_computes += 1
        if self.total_computes > self.max_computes:
            raise RuntimeError(
                "Too many dask computations were scheduled: {}".format(
                    self.total_computes
                )
            )
        return dask.get(dsk, keys, **kwargs)


scheduler = Scheduler()

with dask.config.set(scheduler=scheduler):
    COORDS = dict(
        dim_0=pd.date_range("2042-01-01", periods=31, freq="D"),
        dim_1=range(0, 500),
        dim_2=range(0, 500),
    )

    da = xr.DataArray(
        np.random.rand(31 * 500 * 500).reshape((31, 500, 500)),
        coords=COORDS,
    ).chunk(dict(dim_0=-1, dim_1=100, dim_2=100))

    print(da)

    resampled = da.resample(dim_0="MS")

    for label, sample in resampled:
        # sample = sample.compute()
        idx = sample.argmax("dim_0")
        sampled = sample.isel(dim_0=idx)

print("Total number of computes: %d" % scheduler.total_computes)
```

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Array indexing with dask arrays 374025325
932582053 https://github.com/pydata/xarray/issues/2511#issuecomment-932582053 https://api.github.com/repos/pydata/xarray/issues/2511 IC_kwDOAMm_X843lhKl cerodell 38116316 2021-10-01T21:18:53Z 2021-10-01T21:20:49Z NONE

Hello! First off, thank you for all the hard work on xarray! I use it every day and love it :)

I am also having issues indexing with dask arrays and get the following error.

```
Traceback (most recent call last):
  File "~/phd-comps/scripts/sfire-pbl.py", line 64, in <module>
    PBLH = height.isel(gradT2.argmax(dim=['interp_level']))
  File "~/miniconda3/envs/cr/lib/python3.7/site-packages/xarray/core/dataarray.py", line 1184, in isel
    indexers, drop=drop, missing_dims=missing_dims
  File "~/miniconda3/envs/cr/lib/python3.7/site-packages/xarray/core/dataset.py", line 2389, in _isel_fancy
    new_var = var.isel(indexers=var_indexers)
  File "~/miniconda3/envs/cr/lib/python3.7/site-packages/xarray/core/variable.py", line 1156, in isel
    return self[key]
  File "~/miniconda3/envs/cr/lib/python3.7/site-packages/xarray/core/variable.py", line 776, in __getitem__
    dims, indexer, new_order = self._broadcast_indexes(key)
  File "~/miniconda3/envs/cr/lib/python3.7/site-packages/xarray/core/variable.py", line 632, in _broadcast_indexes
    return self._broadcast_indexes_vectorized(key)
  File "~/miniconda3/envs/cr/lib/python3.7/site-packages/xarray/core/variable.py", line 761, in _broadcast_indexes_vectorized
    return out_dims, VectorizedIndexer(tuple(out_key)), new_order
  File "~/miniconda3/envs/cr/lib/python3.7/site-packages/xarray/core/indexing.py", line 323, in __init__
    f"unexpected indexer type for {type(self).__name__}: {k!r}"
TypeError: unexpected indexer type for VectorizedIndexer: dask.array<getitem, shape=(240, 399, 159), dtype=int64, chunksize=(60, 133, 53), chunktype=numpy.ndarray>

dask    2021.9.1  pyhd8ed1ab_0  conda-forge
xarray  0.19.0    pyhd8ed1ab_0  conda-forge
```

In order to get it to work, I first need to manually call compute to load a NumPy array before using `argmax` with `isel`. I'm not sure what info I can provide to help solve the issue; please let me know and I'll send whatever I can.
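The workaround described above can be sketched as follows. This is a hypothetical minimal example, not the commenter's actual data: the names `gradT2`, `height`, and `PBLH` mirror the traceback, but the arrays here are small synthetic NumPy-backed ones, so `.compute()` is a no-op; with dask-backed data it is the step that materializes the indexer.

```python
import numpy as np
import xarray as xr

# Synthetic stand-ins for the arrays named in the traceback above.
gradT2 = xr.DataArray(
    np.random.rand(4, 5, 6),
    dims=("interp_level", "south_north", "west_east"),
)
height = xr.DataArray(
    np.random.rand(4, 5, 6),
    dims=("interp_level", "south_north", "west_east"),
)

# Workaround: materialize the integer indexer first (with dask-backed
# data this is where the compute happens), then do vectorized indexing.
idx = gradT2.argmax(dim="interp_level").compute()
PBLH = height.isel(interp_level=idx)
print(PBLH.dims)  # ('south_north', 'west_east')
```

With the indexer loaded into memory, `isel` receives a NumPy-backed array and the `VectorizedIndexer` TypeError no longer applies.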

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Array indexing with dask arrays 374025325
932169790 https://github.com/pydata/xarray/issues/2511#issuecomment-932169790 https://api.github.com/repos/pydata/xarray/issues/2511 IC_kwDOAMm_X843j8g- pl-marasco 22492773 2021-10-01T12:04:55Z 2021-10-01T12:04:55Z NONE

@bzah I tested your patch with the following code:

```
import numpy as np
import xarray as xr
from distributed import Client

client = Client()

da = xr.DataArray(
    np.random.rand(20 * 3500 * 3500).reshape((20, 3500, 3500)),
    dims=("time", "x", "y"),
).chunk(dict(time=-1, x=100, y=100))

idx = da.argmax("time").compute()
da.isel(time=idx)
```

In my case it seems to take the same time with or without it, but I would like to know if it's the same for you.

L.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Array indexing with dask arrays 374025325
930309991 https://github.com/pydata/xarray/issues/2511#issuecomment-930309991 https://api.github.com/repos/pydata/xarray/issues/2511 IC_kwDOAMm_X843c2dn pl-marasco 22492773 2021-09-29T15:56:33Z 2021-09-29T15:56:33Z NONE

@pl-marasco Ok, that's strange. I should have saved my use case :/ I will try to reproduce it and provide a gist of it soon.

What I noticed, in my use case, is that it provokes a computation. Is that the reason you consider it slow? Could it be related to #3237?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Array indexing with dask arrays 374025325
930124657 https://github.com/pydata/xarray/issues/2511#issuecomment-930124657 https://api.github.com/repos/pydata/xarray/issues/2511 IC_kwDOAMm_X843cJNx pl-marasco 22492773 2021-09-29T12:22:06Z 2021-09-29T12:22:06Z NONE

@bzah I've been testing your solution and it doesn't seem as slow as you are mentioning. Do you have a specific test we could run so that we can make a more robust comparison?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Array indexing with dask arrays 374025325
567966648 https://github.com/pydata/xarray/issues/2511#issuecomment-567966648 https://api.github.com/repos/pydata/xarray/issues/2511 MDEyOklzc3VlQ29tbWVudDU2Nzk2NjY0OA== roxyboy 8934026 2019-12-20T15:37:09Z 2019-12-20T15:39:10Z NONE

I'm just curious if there's been any progress on this issue. I'm also getting the same error, `TypeError: unexpected indexer type for VectorizedIndexer`, and I would greatly benefit from lazy vectorized indexing.
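For context, the indexing pattern the thread wants to work lazily looks like this minimal sketch (illustrative names, not from the comment). With a NumPy-backed indexer, as here, it succeeds; on affected versions, the same call with a dask-backed `idx` raises the `VectorizedIndexer` TypeError quoted above.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(10, 5, 5), dims=("time", "x", "y"))

# A 2-D integer indexer: one selected time step per (x, y) location.
idx = da.argmax("time")

# Vectorized indexing; the feature request is for this to stay lazy
# when both `da` and `idx` are dask-backed.
out = da.isel(time=idx)
print(out.dims)  # ('x', 'y')
```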

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Array indexing with dask arrays 374025325

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);