

issue_comments


7 rows where issue = 589886157 and user = 14808389 sorted by updated_at descending




user: keewis (7) · issue: add a CI that tests xarray with all optional dependencies but dask (7) · author_association: MEMBER (7)
606276634 · keewis (MEMBER) · created 2020-03-30T22:07:12Z · updated 2020-03-30T22:29:43Z
https://github.com/pydata/xarray/pull/3919#issuecomment-606276634

I don't think I can figure out how to fix the remaining three tests. Is it okay to have the fixes for open_zarr and ZarrArrayWrapper.__getitem__ in this PR?

If it is, this should be ready for review & merge.

Reactions: none
606267473 · keewis (MEMBER) · created 2020-03-30T21:43:35Z · updated 2020-03-30T21:43:35Z
https://github.com/pydata/xarray/pull/3919#issuecomment-606267473

It seems the slices get transformed differently depending on the data: for numpy arrays, slice(-1, 1, -1) is transformed into slice(9, 1, -1), whereas for dask arrays slice(-1, 1, -1) becomes something without a negative step (not sure how that looks, though).

I'd xfail these tests since I think they're bugs and open issues for them.
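The numpy-side transformation described above is ordinary slice normalization against a known array length; a minimal illustration (not xarray code, just the standard library mechanism):

```python
import numpy as np

# slice.indices(length) resolves a negative start/stop against a concrete
# length, which is exactly how slice(-1, 1, -1) becomes slice(9, 1, -1)
# for a length-10 array.
s = slice(-1, 1, -1)
start, stop, step = s.indices(10)
print(start, stop, step)  # 9 1 -1

arr = np.arange(10)
assert (arr[s] == arr[slice(start, stop, step)]).all()
```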

Reactions: +1 (1)
606135343 · keewis (MEMBER) · created 2020-03-30T17:28:28Z · updated 2020-03-30T17:41:50Z
https://github.com/pydata/xarray/pull/3919#issuecomment-606135343

A bit more progress: https://github.com/pydata/xarray/blob/fa37ee6d5a3c53f50dff61caebb342b89e2e35d5/xarray/backends/zarr.py#L58-L60 _arrayize_vectorized_indexer accepts an indexer, so we should pass key directly.

This exposes a different error: plain zarr does not seem to support negative step sizes (zarr+dask seem fine with it, though).
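One way a store without negative-step support can still serve such a request is to read the same elements with a positive step and reverse the result; a sketch under that assumption (read_reversed is a made-up helper, not zarr's API):

```python
import numpy as np

def read_reversed(data, s, length):
    # Hypothetical workaround: emulate a negative-step slice with a
    # positive-step read followed by a reversal.
    start, stop, step = s.indices(length)
    if step > 0:
        return data[s]
    # The same elements the backward slice visits, in increasing order.
    forward = slice(stop + 1, start + 1, -step)
    return data[forward][::-1]

arr = np.arange(10)
s = slice(-1, 1, -1)
assert (read_reversed(arr, s, 10) == arr[s]).all()
```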

Reactions: +1 (1)
606114972 · keewis (MEMBER) · created 2020-03-30T16:49:40Z · updated 2020-03-30T16:51:34Z
https://github.com/pydata/xarray/pull/3919#issuecomment-606114972

The non-serializable lock from the failing rasterio test comes from https://github.com/pydata/xarray/blob/280a14ff5298c83fcee23e1e18e3a37397856ea9/xarray/backends/locks.py#L6-L10, which means the assumption that we don't need to worry about serialization if we don't use dask is incorrect (or the rasterio code uses the serializable lock in unintended ways)?

Also, the chunks parameter of open_zarr defaults to "auto", meaning it tries to chunk by default even if dask is not available. Should we check the availability of dask and override chunks="auto" with chunks=None if we can't import dask? This would reduce the number of failing zarr tests from 98 to 2. Edit: see fa37ee6
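The proposed guard could look roughly like this (a sketch; resolve_chunks is an assumed name, not xarray's actual code):

```python
def resolve_chunks(chunks):
    # Fall back to no chunking when dask is missing, instead of letting
    # chunks="auto" try to chunk with an unavailable dask.
    if chunks == "auto":
        try:
            import dask.array  # noqa: F401
        except ImportError:
            return None
    return chunks
```

Any explicit chunks value passes through unchanged; only the "auto" default is downgraded when dask cannot be imported.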

Reactions: +1 (1)
605714954 · keewis (MEMBER) · created 2020-03-29T22:55:40Z · updated 2020-03-29T23:13:31Z
https://github.com/pydata/xarray/pull/3919#issuecomment-605714954

It is related, but this PR has a different set of failures: there are a lot of zarr-related failures, two rasterio failures, one pseudonetcdf failure, and also a sparse chunking failure. The sparse test and one of the rasterio tests obviously need to be decorated with requires_dask, but I don't know about the others.
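In xarray's test suite, requires_dask is a pytest marker; the idea can be sketched with the standard library alone (module_available is an assumed helper name, not xarray's implementation):

```python
import importlib
import unittest

def module_available(name):
    # True if the named module can be imported, False otherwise.
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

# Skip the decorated test unless dask is importable -- the same effect
# that xarray's pytest-based requires_dask decorator has.
requires_dask = unittest.skipUnless(module_available("dask"), "requires dask")
```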

Reactions: none
605709693 · keewis (MEMBER) · created 2020-03-29T22:10:46Z · updated 2020-03-29T22:11:18Z
https://github.com/pydata/xarray/pull/3919#issuecomment-605709693

It seems iris was the culprit that pulled in dask, so I removed it.

I also renamed the CI to py38-all-but-dask, but I'm not really confident in my naming abilities. If someone has a better idea, I'd be happy to use that instead.

Reactions: none
605707756 · keewis (MEMBER) · created 2020-03-29T21:54:17Z · updated 2020-03-29T22:07:13Z
https://github.com/pydata/xarray/pull/3919#issuecomment-605707756

Yes, exactly. There were a few issues we did not detect because some code paths are only chosen if dask is not available at all. See #3794 for a list of a few of these issues.

Reactions: none


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 70.497ms · About: xarray-datasette