issue_comments


2 rows where issue = 1332231863 and user = 14808389 sorted by updated_at descending




Facets

  • user: keewis · 2
  • issue: Public testing framework for duck array integration · 2
  • author_association: MEMBER · 2
Comment 1216451559

  • html_url: https://github.com/pydata/xarray/issues/6894#issuecomment-1216451559
  • issue_url: https://api.github.com/repos/pydata/xarray/issues/6894
  • node_id: IC_kwDOAMm_X85IgZPn
  • user: keewis (14808389)
  • created_at: 2022-08-16T10:25:43Z
  • updated_at: 2022-08-16T10:25:43Z
  • author_association: MEMBER
  • issue: Public testing framework for duck array integration (1332231863)

body:

there are also the experimental array API strategies built into hypothesis

reactions:

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
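The comment above refers to hypothesis.extra.array_api. A minimal sketch of how those strategies are used, assuming an array-API-compatible namespace is installed (here array_api_strict, the extracted successor of the experimental numpy.array_api module that existed when the comment was written); the tested property is an arbitrary illustration, not part of the proposed framework:

import array_api_strict as xp
from hypothesis import given
from hypothesis.extra.array_api import make_strategies_namespace

# Bind the strategies to the chosen array-API namespace.
xps = make_strategies_namespace(xp)

# Draw arrays with library-native dtypes and shapes.
@given(xps.arrays(dtype=xps.floating_dtypes(), shape=xps.array_shapes()))
def test_add_zero_preserves_shape(arr):
    assert (arr + xp.zeros_like(arr)).shape == arr.shape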
Comment 1209144356

  • html_url: https://github.com/pydata/xarray/issues/6894#issuecomment-1209144356
  • issue_url: https://api.github.com/repos/pydata/xarray/issues/6894
  • node_id: IC_kwDOAMm_X85IEhQk
  • user: keewis (14808389)
  • created_at: 2022-08-09T09:33:07Z
  • updated_at: 2022-08-09T09:33:07Z
  • author_association: MEMBER
  • issue: Public testing framework for duck array integration (1332231863)

body:

With the implementation in #4972 you should already be able to specify a hypothesis strategy to create, e.g., a random awkward array. The same goes for dask and other parallel computing frameworks: if you can construct a hypothesis strategy for them, the testing framework should be able to use it. check_reduce (or maybe it should just be check?) should allow customizing the comparison (actually, that's the entire test code at the moment), so inserting compute (or todense / get) calls should be easy.

For setup and teardown I think we could use pytest fixtures (and apply them automatically to each function). However, maybe we should just not use parametrize but instead define separate functions for each reduce operation? Then it would be possible to override them manually. As far as I remember, I chose not to do that because tests that only delegate to super().test_function() are just not great design; if we can think of a way to do that while avoiding those kinds of test redefinitions, I'd be happy with it (and then we could get rid of the apply_marks function, which is an ugly hack of pytest internals).

I agree that moving the array library tests to dedicated repositories makes a lot of sense (for example, the pint tests use old versions of the conversion functions from pint-xarray), but note that at the moment the pint tests seem to increase xarray's total test coverage a bit. I guess that just means we'd have to improve the rest of the test suite?

reactions:

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
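The API of the framework proposed in #4972 isn't reproduced on this page, but a duck-array strategy of the kind the first paragraph of this comment describes can be built from hypothesis's numpy strategies. A sketch for dask, where dask_arrays and compare are hypothetical names, the chunk size is arbitrary, and how they would plug into check_reduce depends on the PR:

import dask.array as da
import hypothesis.extra.numpy as npst
import numpy as np

def dask_arrays():
    # Wrap randomly drawn numpy arrays in small-chunked dask arrays.
    return npst.arrays(
        dtype=npst.floating_dtypes(),
        shape=npst.array_shapes(min_dims=1, max_dims=3),
    ).map(lambda arr: da.from_array(arr, chunks=2))

def compare(actual, expected):
    # Customized comparison: materialize the lazy result first,
    # as the comment's compute / todense / get remark suggests.
    np.testing.assert_allclose(np.asarray(actual.compute()), expected)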
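The setup/teardown idea from the second paragraph maps onto autouse pytest fixtures, which pytest applies to every test in their scope without any parametrize machinery. A minimal sketch, using a dask.distributed client as the thing being set up (the class and test names are hypothetical):

import pytest
from dask.distributed import Client

class TestDaskReduce:
    @pytest.fixture(autouse=True)
    def _client(self):
        # Runs before each test in the class; the code after `yield`
        # runs afterwards, so no explicit teardown hook is needed.
        client = Client(processes=False)
        yield
        client.close()

    def test_placeholder(self):
        # Hypothetical test body; the fixture above wraps it automatically.
        assert True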

Table schema
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
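For reference, the filtered view at the top of this page ("2 rows where issue = 1332231863 and user = 14808389 sorted by updated_at descending") is a plain query against this schema; a sketch using Python's sqlite3 module, where github.db is a hypothetical local copy of the database:

import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy
rows = conn.execute(
    """
    select id, user, created_at, updated_at, body
    from issue_comments
    where issue = ? and user = ?
    order by updated_at desc
    """,
    (1332231863, 14808389),
).fetchall()
conn.close()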