
issue_comments


5 rows where issue = 1336119080 and user = 12229877 sorted by updated_at descending

Columns: id, html_url, issue_url, node_id, user, created_at, updated_at, author_association, body, reactions, performed_via_github_app, issue
id: 1251922767 · node_id: IC_kwDOAMm_X85KntNP
user: Zac-HD (12229877) · author_association: CONTRIBUTOR
created_at: 2022-09-20T06:59:10Z · updated_at: 2022-09-20T06:59:10Z
html_url: https://github.com/pydata/xarray/pull/6908#issuecomment-1251922767
issue_url: https://api.github.com/repos/pydata/xarray/issues/6908

> @Zac-HD if I could request one more review please! The two remaining problems for me are:

Absolutely! Some quick comments this evening; I would also like to do a full review again before merge, but that might be next week or over the weekend - I'm out for a conference from early Thursday.

> 1. How should we alter the API of datasets to make it easier to construct Dataset objects containing duck-typed arrays (see Hypothesis strategies in xarray.testing.strategies #6908 (comment))

Replied in the thread above.

> 2. Why does the example generation performance seem to have gotten worse? 😕 I added what I thought were small refactors (e.g. _sizes_from_dim_names) which may somehow be related, but I'm not sure.

I'd be pretty surprised if that was related; st.fixed_dictionaries() is internally basically just that zip() trick anyway. I'd guess that this is mostly "as you implement the last few complicated data-gen options, they start taking nonzero time", but I'm not confident in that without reading closely and probably measuring some perf things.
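
For anyone following along, here is a minimal sketch (not code from this PR; the dimension names and bounds are invented) of the two approaches being compared: the zip()-style trick of drawing name/size pairs together, versus st.fixed_dictionaries() when the keys are known up front.

import hypothesis.strategies as st

dim_names = st.sampled_from(["x", "y", "z", "time"])  # illustrative names only
sizes = st.integers(min_value=1, max_value=5)

# zip() trick: draw (name, size) pairs together so they stay consistent,
# then collapse the pairs into a dict.
sizes_via_zip = st.lists(
    st.tuples(dim_names, sizes), min_size=1, max_size=3, unique_by=lambda t: t[0]
).map(dict)

# st.fixed_dictionaries(): the same idea when the set of keys is fixed up front.
sizes_via_fixed = st.fixed_dictionaries({"x": sizes, "y": sizes})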

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Hypothesis strategies in xarray.testing.strategies (1336119080)
id: 1238674803 · node_id: IC_kwDOAMm_X85J1K1z
user: Zac-HD (12229877) · author_association: CONTRIBUTOR
created_at: 2022-09-06T21:37:07Z · updated_at: 2022-09-06T21:37:07Z
html_url: https://github.com/pydata/xarray/pull/6908#issuecomment-1238674803
issue_url: https://api.github.com/repos/pydata/xarray/issues/6908

(generally staying unsubbed, so please ping me whenever you've got questions or would like another review!)

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Hypothesis strategies in xarray.testing.strategies (1336119080)
id: 1218435300 · node_id: IC_kwDOAMm_X85In9jk
user: Zac-HD (12229877) · author_association: CONTRIBUTOR
created_at: 2022-08-17T19:59:55Z · updated_at: 2022-08-17T19:59:55Z
html_url: https://github.com/pydata/xarray/pull/6908#issuecomment-1218435300
issue_url: https://api.github.com/repos/pydata/xarray/issues/6908

(unsubbing for noise, please @-me when you'd like another review!)

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Hypothesis strategies in xarray.testing.strategies (1336119080)
id: 1217524909 · node_id: IC_kwDOAMm_X85IkfSt
user: Zac-HD (12229877) · author_association: CONTRIBUTOR
created_at: 2022-08-17T06:39:42Z · updated_at: 2022-08-17T06:39:42Z
html_url: https://github.com/pydata/xarray/pull/6908#issuecomment-1217524909
issue_url: https://api.github.com/repos/pydata/xarray/issues/6908

OK, reviewed ✅

Overall it looks pretty good, but there are a couple of places where the API you've got is pushing you into some really nasty performance traps where you have to use rejection sampling to keep things consistent. We have some tricks to help, but it's fundamentally exponential scaling - which means that it works right up until it doesn't work at all, and most of your users are likely to hit that regime 😕 Definitely possible to fix that, but it's more like a redesign than patching a typo.
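
To make the trap concrete, here is a generic sketch (not the PR's actual strategies): filtering independently drawn values until they happen to agree is rejection sampling, and its acceptance rate collapses as the value space grows, whereas drawing the shared value once and reusing it is consistent by construction.

import hypothesis.strategies as st

side = st.integers(min_value=1, max_value=5)

# Rejection sampling: draw two independent 2-element shapes and discard
# mismatches; the chance of agreement shrinks as the space of shapes grows.
matched_by_filter = st.tuples(
    st.lists(side, min_size=2, max_size=2),
    st.lists(side, min_size=2, max_size=2),
).filter(lambda pair: pair[0] == pair[1])

# Consistent by construction: draw one shape and reuse it, so no draw is rejected.
matched_by_construction = st.lists(side, min_size=2, max_size=2).map(
    lambda shape: (shape, list(shape))
)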

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Hypothesis strategies in xarray.testing.strategies (1336119080)
id: 1216920806 · node_id: IC_kwDOAMm_X85IiLzm
user: Zac-HD (12229877) · author_association: CONTRIBUTOR
created_at: 2022-08-16T17:12:40Z · updated_at: 2022-08-16T17:12:40Z
html_url: https://github.com/pydata/xarray/pull/6908#issuecomment-1216920806
issue_url: https://api.github.com/repos/pydata/xarray/issues/6908

> @Zac-HD would you be able to give your hypothesis-expert opinions on the general design of these strategies? Especially the whole "strategies that accept multiple other strategies" thing.

I'll aim for a proper review tonight! Quick remarks: strategies accepting strategies is fine, though our API style guide suggests accepting strategies xor values (this is violated in a few places in our Numpy extra, but it's still a good base principle). The guides might have other useful advice; I'd recommend skimming them since you're planning to ship Hypothesis-extending code to quite a few people!
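
As a rough illustration of the "accept strategies" side of that principle (the function name and fields below are hypothetical, not xarray's eventual API), a strategy-returning function can take a strategy for a field so that callers can swap in their own:

import hypothesis.strategies as st

def variables(*, dtype=st.sampled_from(["float32", "float64", "int64"])):
    # `dtype` is a strategy rather than a concrete value, so callers customise
    # generation by passing a different strategy.
    return st.fixed_dictionaries(
        {"dtype": dtype, "size": st.integers(min_value=0, max_value=10)}
    )

only_floats = variables(dtype=st.just("float32"))  # usage: restrict to one dtype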

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Hypothesis strategies in xarray.testing.strategies (1336119080)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
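
For reference, the filtered view on this page ("5 rows where issue = 1336119080 and user = 12229877 sorted by updated_at descending") corresponds to a query along the following lines; this sketch assumes a local SQLite copy of the database, and the filename github.db is an assumption.

import sqlite3

# Connect to an assumed local export of this Datasette database.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (1336119080, 12229877),
).fetchall()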