
issue_comments

10 rows where issue = 818059250 and user = 14808389 sorted by updated_at descending

user 1

  • keewis · 10

issue 1

  • Automatic duck array testing - reductions · 10

author_association 1

  • MEMBER · 10
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at (sorted descending), author_association, body, reactions, performed_via_github_app, issue
1216441506 https://github.com/pydata/xarray/pull/4972#issuecomment-1216441506 https://api.github.com/repos/pydata/xarray/issues/4972 IC_kwDOAMm_X85IgWyi keewis 14808389 2022-08-16T10:16:30Z 2022-08-16T10:16:30Z MEMBER

> We might just want to wait to merge this before merging that though anyway.

I actually think it should be the other way around: if we can get the strategies from #6908 to shrink well, we might be able to fix the occasional test timeouts here (which should be one of the final issues we have before we can merge this).
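
As an illustration of what "shrinking well" buys here (a hypothetical sketch, not the strategies from #6908): keeping shapes and element ranges tightly bounded lets hypothesis reduce a failing example to a small array quickly, which also keeps individual test runs short.

# Hypothetical sketch of a shrink-friendly array strategy; not the code from
# #6908 or from this PR.
import numpy as np
import hypothesis.strategies as st
import hypothesis.extra.numpy as npst

# small, bounded shapes and finite element ranges shrink quickly
shapes = npst.array_shapes(min_dims=1, max_dims=3, max_side=5)
arrays = npst.arrays(
    dtype=st.sampled_from([np.float32, np.float64]),
    shape=shapes,
    elements=st.floats(-1e3, 1e3, width=32),
)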

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic duck array testing - reductions 818059250
1208349577 https://github.com/pydata/xarray/pull/4972#issuecomment-1208349577 https://api.github.com/repos/pydata/xarray/issues/4972 IC_kwDOAMm_X85IBfOJ keewis 14808389 2022-08-08T16:33:43Z 2022-08-08T17:00:26Z MEMBER

I started with reduce because that seemed the easiest to check, but I guess I was wrong? In any case, checking that the wrapped data is indeed what we expect should be very easy to add (not in this PR, though; that has stalled long enough).

The idea is to do something like pandas' ExtensionArray test suite, which exposes a set of base classes that can be inherited from. So to test duck array support in a downstream library, you'd inherit from the appropriate classes and override create (which returns a strategy for the tested array) and the check_* methods (which check properties of the expected result; I think that's what you're asking for in 4).

It definitely doesn't feel very polished at the moment, but hopefully we can figure out a way to fix that (and then we can also hopefully figure out a good way to document this).

Edit: also, let's discuss the general plans in a new issue
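
A rough sketch of what the inherit-and-override design described above could look like from a downstream library's point of view; the module, base class, and hook names (xarray_duck_testing, ReduceTests, create, check_reduce) are hypothetical, not xarray's actual API.

# Hypothetical sketch only: the imported module, base class, and hook names do
# not exist in xarray; they just illustrate the inherit-and-override design.
import numpy as np
import hypothesis.extra.numpy as npst

from xarray_duck_testing import ReduceTests  # hypothetical base class
from my_library import MyDuckArray           # the duck array under test (hypothetical)


class TestMyDuckArrayReductions(ReduceTests):
    @staticmethod
    def create():
        # strategy generating the array type that xarray should wrap
        return npst.arrays(dtype=np.float64, shape=npst.array_shapes()).map(MyDuckArray)

    def check_reduce(self, obj, op, *args, **kwargs):
        # property checked for every reduction: the result matches the same
        # reduction applied to the densified data
        actual = getattr(obj, op)(*args, **kwargs)
        expected = getattr(np.asarray(obj), op)(*args, **kwargs)
        np.testing.assert_allclose(np.asarray(actual), expected)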

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic duck array testing - reductions 818059250
1208285136 https://github.com/pydata/xarray/pull/4972#issuecomment-1208285136 https://api.github.com/repos/pydata/xarray/issues/4972 IC_kwDOAMm_X85IBPfQ keewis 14808389 2022-08-08T15:35:05Z 2022-08-08T15:35:27Z MEMBER

xref pydata/sparse#555

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic duck array testing - reductions 818059250
1203850549 https://github.com/pydata/xarray/pull/4972#issuecomment-1203850549 https://api.github.com/repos/pydata/xarray/issues/4972 IC_kwDOAMm_X85HwU01 keewis 14808389 2022-08-03T11:53:22Z 2022-08-03T11:53:22Z MEMBER

the remaining failures for pint are #6822.

I also wonder whether we should have a separate job for hypothesis: this makes CI run quite a bit longer (about 70s on my machine just for sparse and pint reduce tests, which are a small part of the API)
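
One way to split these tests into their own CI job (a sketch, not what this PR does): tag the property-based modules with a dedicated marker in conftest.py, run them with pytest -m slow_hypothesis in a separate job, and exclude them elsewhere with -m "not slow_hypothesis". The marker name and the tests/duckarrays/ path are assumptions.

# conftest.py sketch; the marker name and the tests/duckarrays/ path are
# assumptions, not the layout used in this PR.
import pytest


def pytest_configure(config):
    config.addinivalue_line(
        "markers", "slow_hypothesis: property-based duck array tests (run in a separate CI job)"
    )


def pytest_collection_modifyitems(config, items):
    for item in items:
        if "duckarrays" in str(item.fspath):
            item.add_marker(pytest.mark.slow_hypothesis)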

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic duck array testing - reductions 818059250
899105279 https://github.com/pydata/xarray/pull/4972#issuecomment-899105279 https://api.github.com/repos/pydata/xarray/issues/4972 IC_kwDOAMm_X841l0H_ keewis 14808389 2021-08-15T20:25:08Z 2021-08-15T20:25:08Z MEMBER

The pint tests pass, so all that's left is to figure out how to fix the sparse tests.

sparse seems to have different dtype casting behavior (it casts e.g. a mean of float16 to float64, while numpy stays at float16), so I'll need to figure out how to work around that. There's also a TypeError, which I'll have to investigate.
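
A possible shape for that workaround (a sketch, not the fix that ended up in the PR): compare the densified values but make the dtype check optional, since sparse may upcast where numpy does not.

# Sketch of a dtype-tolerant comparison helper; hypothetical, not the code in
# this PR.
import numpy as np
import sparse


def assert_matches_numpy(actual, expected, check_dtype=False):
    if isinstance(actual, sparse.COO):
        actual = actual.todense()
    if check_dtype:
        # strict mode: sparse's upcasting (e.g. float16 mean -> float64) fails here
        assert actual.dtype == expected.dtype
    np.testing.assert_allclose(actual, expected)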

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic duck array testing - reductions 818059250
825834926 https://github.com/pydata/xarray/pull/4972#issuecomment-825834926 https://api.github.com/repos/pydata/xarray/issues/4972 MDEyOklzc3VlQ29tbWVudDgyNTgzNDkyNg== keewis 14808389 2021-04-23T18:15:33Z 2021-04-23T18:15:33Z MEMBER

ping @shoyer, I'm mostly done with cleaning up the code.

@Zac-HD, could I ask for another review? I think I applied most of your suggestions but I'm sure I missed something.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic duck array testing - reductions 818059250
816994501 https://github.com/pydata/xarray/pull/4972#issuecomment-816994501 https://api.github.com/repos/pydata/xarray/issues/4972 MDEyOklzc3VlQ29tbWVudDgxNjk5NDUwMQ== keewis 14808389 2021-04-09T22:01:01Z 2021-04-09T22:01:44Z MEMBER

@shoyer, I finally found pandas' extension array test suite again, which divides the tests into groups and provides base classes for each of them. It then uses fixtures to parametrize those tests, but I think we can use hypothesis strategies instead.

This will probably be a bit easier to implement and maintain, and allow much more control, although parametrize marks are still stripped, so the tests will be a bit more verbose.
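
Roughly, the strategies-instead-of-fixtures idea could look like this (hypothetical names; not the code in this PR): the base class draws from a strategy supplied by the subclass, where pandas' suite would instead have the subclass override fixtures.

# Hypothetical sketch: a base class whose tests draw from a strategy supplied
# by the subclass, replacing fixture-based parametrization.
import numpy as np
import hypothesis.strategies as st
from hypothesis import given


class ReduceTestsBase:
    # subclasses override this with a strategy producing their duck array
    create = staticmethod(lambda: st.just(np.arange(6.0).reshape(2, 3)))

    @given(st.data())
    def test_mean(self, data):
        arr = data.draw(self.create())
        np.testing.assert_allclose(np.asarray(arr.mean()), np.asarray(arr).mean())


class TestNumpyReductions(ReduceTestsBase):
    # a downstream class only replaces the strategy
    create = staticmethod(lambda: st.just(np.linspace(0, 1, 10)))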

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic duck array testing - reductions 818059250
808739976 https://github.com/pydata/xarray/pull/4972#issuecomment-808739976 https://api.github.com/repos/pydata/xarray/issues/4972 MDEyOklzc3VlQ29tbWVudDgwODczOTk3Ng== keewis 14808389 2021-03-27T14:16:40Z 2021-03-28T20:36:09Z MEMBER

For more context, the main idea is to provide a generic and cheap way to check the compatibility of any duckarray (including nested duckarrays) with xarray's interface. I settled on having a function construct a test class based on callbacks and marks, but I'm not sure this is the best way (it is the best I can come up with, however).

I'm considering hypothesis (hence the separate branch) because I was thinking that it might be easier to go through with the whole "have a function accepting some callbacks and marks generate the tests" idea if I used hypothesis strategies instead of possibly parametrized callbacks. There are still some potential issues left, though; most importantly, I'm trying to find a generic way to compute the expected values or expected failures.

Thanks for hints 2 and 3; I'm still not quite used to the concepts of hypothesis. As for 4, xarray has parameters intended for use in label-based indexing (e.g. with sel), so I'm trying to find a way to mark those in order to tell them apart from other kinds of parameters (like position-based indexers).
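
The "have a function construct a test class from callbacks and marks" idea mentioned above could look roughly like this; every name here is illustrative and none of it exists in xarray.

# Purely illustrative: build a test class from a strategy factory plus a
# mapping of per-test pytest marks; none of these names exist in xarray.
import numpy as np
import hypothesis.strategies as st
from hypothesis import given


def make_reduce_tests(create, marks=None, name="TestReduce"):
    marks = marks or {}

    @given(st.data())
    def test_sum(self, data):
        arr = data.draw(create())
        np.testing.assert_allclose(np.asarray(arr.sum()), np.asarray(arr).sum())

    tests = {"test_sum": test_sum}
    # apply any per-test marks (e.g. xfail/skip) supplied by the caller
    for test_name, mark in marks.items():
        if test_name in tests:
            tests[test_name] = mark(tests[test_name])
    return type(name, (), tests)


# assigning the generated class to a Test* name lets pytest collect it
TestNumpySum = make_reduce_tests(create=lambda: st.just(np.arange(12.0).reshape(3, 4)))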

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic duck array testing - reductions 818059250
808414458 https://github.com/pydata/xarray/pull/4972#issuecomment-808414458 https://api.github.com/repos/pydata/xarray/issues/4972 MDEyOklzc3VlQ29tbWVudDgwODQxNDQ1OA== keewis 14808389 2021-03-26T18:00:27Z 2021-03-26T18:54:23Z MEMBER

@Zac-HD, would you have any advice for something like this?

Edit: this is an initial draft using hypothesis

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic duck array testing - reductions 818059250
800366523 https://github.com/pydata/xarray/pull/4972#issuecomment-800366523 https://api.github.com/repos/pydata/xarray/issues/4972 MDEyOklzc3VlQ29tbWVudDgwMDM2NjUyMw== keewis 14808389 2021-03-16T15:32:08Z 2021-03-26T18:00:22Z MEMBER

@shoyer: I asked about this in pytest-dev/pytest#8450, but it seems we are on our own here. The recommendation was to use inheritance, but that currently breaks parametrize (see the pad tests in test_variable).

I wonder if using hypothesis would make this easier. It would probably require a bit more reading, since I don't yet fully understand how to choose the properties, but dynamic parametrization would not be necessary: we would pass strategies instead of creation functions and potentially find more bugs, at the cost of longer test runs.
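
For reference, the inheritance-based pattern suggested in pytest-dev/pytest#8450 looks roughly like this (a minimal sketch, not xarray's actual test code): a base class holds the parametrized tests and subclasses only override the creation function; adjusting the parametrization per subclass is where it becomes awkward.

# Minimal sketch of the inheritance approach referred to above; not xarray's
# actual test code.
import numpy as np
import pytest


class VariableReduceTests:
    def create(self, shape):
        raise NotImplementedError

    @pytest.mark.parametrize("op", ["mean", "sum", "std"])
    def test_reduce(self, op):
        arr = self.create((2, 3))
        np.testing.assert_allclose(
            np.asarray(getattr(arr, op)()), getattr(np.asarray(arr), op)()
        )


class TestNumpyVariableReduce(VariableReduceTests):
    # a downstream library only overrides the creation function
    def create(self, shape):
        return np.linspace(0.0, 1.0, int(np.prod(shape))).reshape(shape)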

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Automatic duck array testing - reductions 818059250

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
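
The row selection at the top of this page (issue = 818059250, user = 14808389, newest updated_at first) corresponds to a query like the one below, shown here with Python's sqlite3 against a local copy of the database; the file name github.db is an assumption.

# Sketch: reproducing this page's row selection against a local SQLite copy of
# the database; the file name "github.db" is an assumption.
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, body
    FROM issue_comments
    WHERE issue = ? AND user = ?
    ORDER BY updated_at DESC
    """,
    (818059250, 14808389),
).fetchall()
for comment_id, created_at, updated_at, body in rows:
    print(comment_id, updated_at)
conn.close()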