issue_comments


7 rows where issue = 591101988 and user = 2448579 sorted by updated_at descending


id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
626181186 https://github.com/pydata/xarray/pull/3922#issuecomment-626181186 https://api.github.com/repos/pydata/xarray/issues/3922 MDEyOklzc3VlQ29tbWVudDYyNjE4MTE4Ng== dcherian 2448579 2020-05-09T14:04:23Z 2020-05-09T14:04:56Z MEMBER

The test fails for object arrays because we compute eagerly in nanops._nan_argminmax_object to raise an error for all-NaN slices.

To solve this we could:

1. Fix #3884 so that nanargmin and nanargmax never raise an error for all-NaN slices.
2. Figure out some clever way to raise the error at compute time rather than at graph-construction time.

For now, I bumped up max_computes to 1 for object arrays.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  FIX: correct dask array handling in _calc_idxminmax 591101988
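Option 2 above could look something like the following pure-numpy sketch (the function name is illustrative, not xarray's actual internals): the all-NaN check moves inside the kernel that would run per chunk, so once the kernel is wrapped with dask's map_blocks, the ValueError surfaces at compute time instead of while the graph is being built.

```python
import numpy as np

def argminmax_kernel(block):
    # Illustrative chunk-level kernel: the all-NaN check happens *inside*
    # the function dask would run per block, so nothing is computed while
    # the graph is constructed; the error fires only on .compute().
    if np.isnan(block).all():
        raise ValueError("All-NaN slice encountered")
    return np.nanargmin(block)

# Eager use for demonstration; with a dask array this would be wrapped as
# da.map_blocks(argminmax_kernel, darr, dtype=np.intp).
print(argminmax_kernel(np.array([3.0, np.nan, 1.0])))  # -> 2
```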
615935096 https://github.com/pydata/xarray/pull/3922#issuecomment-615935096 https://api.github.com/repos/pydata/xarray/issues/3922 MDEyOklzc3VlQ29tbWVudDYxNTkzNTA5Ng== dcherian 2448579 2020-04-18T19:51:32Z 2020-04-18T19:55:01Z MEMBER

The compute error is from here: https://github.com/pydata/xarray/blob/6a6f2c8748464c89a61dbdbc9636bce78a965369/xarray/core/nanops.py#L48-L60

I think we'll have to rethink the skipna conditions for dask arrays so that the compute doesn't happen.

Or figure out why we do this check in the first place. hmm...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  FIX: correct dask array handling in _calc_idxminmax 591101988
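One way the skipna handling could avoid the eager check (a hedged sketch, not the actual nanops code): instead of computing a validity count up front to decide which branch to take, express the whole reduction, including the all-NaN fallback, as array operations, which dask can keep lazy end to end.

```python
import numpy as np

def nanmin_no_eager_check(a, axis=None):
    # Sketch: no data-dependent Python branching, so nothing forces an
    # intermediate compute on a dask array. All-NaN slices are patched up
    # with np.where after the reduction instead of checked beforehand.
    a = np.asarray(a, dtype=float)
    valid = (~np.isnan(a)).sum(axis=axis)
    result = np.min(np.where(np.isnan(a), np.inf, a), axis=axis)
    return np.where(valid == 0, np.nan, result)

print(nanmin_no_eager_check([[1.0, np.nan], [np.nan, np.nan]], axis=1))
```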
615932590 https://github.com/pydata/xarray/pull/3922#issuecomment-615932590 https://api.github.com/repos/pydata/xarray/issues/3922 MDEyOklzc3VlQ29tbWVudDYxNTkzMjU5MA== dcherian 2448579 2020-04-18T19:32:24Z 2020-04-18T19:32:24Z MEMBER

Yeah, interestingly, we don't raise an error when trying to chunk IndexVariables.

I've pushed a commit where we extract the underlying numpy array, chunk that, index it and then wrap it up in a DataArray o_O.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  FIX: correct dask array handling in _calc_idxminmax 591101988
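The workaround described above, sketched with plain dask (the array values and chunk sizes here are made up for illustration): pull the underlying numpy array out, chunk it, and index the chunked copy with the dask indexer, so that the indexing stays lazy instead of forcing a compute.

```python
import numpy as np
import dask.array as da

coord = np.arange(10.0)                   # stand-in for an IndexVariable's data
indexer = da.from_array(np.array([2, 7]), chunks=2)

chunked = da.from_array(coord, chunks=5)  # chunk the extracted numpy array...
taken = chunked[indexer]                  # ...so dask-on-dask indexing stays lazy
print(taken.compute())                    # values only materialize here
```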
614043968 https://github.com/pydata/xarray/pull/3922#issuecomment-614043968 https://api.github.com/repos/pydata/xarray/issues/3922 MDEyOklzc3VlQ29tbWVudDYxNDA0Mzk2OA== dcherian 2448579 2020-04-15T13:34:22Z 2020-04-15T13:34:22Z MEMBER

Not a problem. Thanks for working on this!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  FIX: correct dask array handling in _calc_idxminmax 591101988
614030919 https://github.com/pydata/xarray/pull/3922#issuecomment-614030919 https://api.github.com/repos/pydata/xarray/issues/3922 MDEyOklzc3VlQ29tbWVudDYxNDAzMDkxOQ== dcherian 2448579 2020-04-15T13:11:00Z 2020-04-15T13:11:00Z MEMBER

@kmuehlbauer I've pushed a commit adding the decorator to just the 2D test_idxmax. The decorator is a no-op for numpy, so all the numpy tests pass, but the dask tests now fail (because it is indeed computing things). Since the 2D tests were passing prior to this commit, I guess we should go back to the map_blocks solution.

because you can't index a NumPy array with a dask array?

I think @shoyer is right here.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  FIX: correct dask array handling in _calc_idxminmax 591101988
613555599 https://github.com/pydata/xarray/pull/3922#issuecomment-613555599 https://api.github.com/repos/pydata/xarray/issues/3922 MDEyOklzc3VlQ29tbWVudDYxMzU1NTU5OQ== dcherian 2448579 2020-04-14T16:49:26Z 2020-04-14T16:49:26Z MEMBER

I suggest updating the tests before reverting anything. This solution may work ...

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  FIX: correct dask array handling in _calc_idxminmax 591101988
613546215 https://github.com/pydata/xarray/pull/3922#issuecomment-613546215 https://api.github.com/repos/pydata/xarray/issues/3922 MDEyOklzc3VlQ29tbWVudDYxMzU0NjIxNQ== dcherian 2448579 2020-04-14T16:31:24Z 2020-04-14T16:31:24Z MEMBER

If your tests are passing now, it's likely that they're computing things to make them work. We should wrap the dask-variable tests in the raise_if_dask_computes() context (imported from test_dask.py), but that looks non-trivial given how the tests are structured at the moment.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  FIX: correct dask array handling in _calc_idxminmax 591101988
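For reference, the idea behind raise_if_dask_computes can be sketched as a counting scheduler (a simplified sketch of the pattern, not xarray's exact implementation): swap in a dask scheduler that counts graph executions and fails once more than the allowed number happen.

```python
import dask
import dask.array as da

class CountingScheduler:
    # Simplified sketch: a drop-in scheduler that counts how many times a
    # graph is executed and raises once the budget (max_computes) is spent.
    def __init__(self, max_computes=0):
        self.total_computes = 0
        self.max_computes = max_computes

    def __call__(self, dsk, keys, **kwargs):
        self.total_computes += 1
        if self.total_computes > self.max_computes:
            raise RuntimeError(
                f"Too many computes: {self.total_computes} > {self.max_computes}"
            )
        return dask.get(dsk, keys, **kwargs)

arr = da.zeros(4, chunks=2)
scheduler = CountingScheduler(max_computes=1)
with dask.config.set(scheduler=scheduler):
    arr.sum().compute()  # first compute: within budget
    # a second .compute() inside this block would raise RuntimeError
```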

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);