
issue_comments


3 rows where author_association = "MEMBER", issue = 761270240, and user = 14808389, sorted by updated_at descending

id: 743924206
html_url: https://github.com/pydata/xarray/pull/4672#issuecomment-743924206
issue_url: https://api.github.com/repos/pydata/xarray/issues/4672
node_id: MDEyOklzc3VlQ29tbWVudDc0MzkyNDIwNg==
user: keewis (14808389)
created_at: 2020-12-13T00:13:02Z
updated_at: 2020-12-13T17:09:36Z
author_association: MEMBER
body:

> locally on windows I find no large difference between numba 0.51 and 0.52

That's really strange. Why do we see that much of a speed-up once we downgrade numba on Azure Pipelines?

> there are about 750 xfailed tests in test_units.py

Yes, I will have to update those. Until the index refactor lands, I think we can safely skip all tests that rely on units in indexes, which should improve the situation; there might also be a few tests that were already fixed by pint.
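The approach described above (skipping, rather than xfailing, everything that needs units in indexes until the index refactor lands) could be sketched with a reusable marker. The flag and marker names below are hypothetical illustrations, not xarray's actual test utilities:

```python
import pytest

# Hypothetical flag: would become True once the index refactor supports
# unit-aware indexes; until then the whole group of tests is skipped.
UNIT_INDEXES_SUPPORTED = False

# Reusable skip marker, so tests that cannot work at all no longer show up
# among the ~750 xfail reports in test_units.py.
requires_unit_indexes = pytest.mark.skipif(
    not UNIT_INDEXES_SUPPORTED,
    reason="units in indexes need the index refactor",
)

@requires_unit_indexes
def test_sel_with_unit_coordinates():
    ...
```

A skip is reported once and never executed, so it is both faster and less noisy than hundreds of xfails that still run to failure.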

Edit: see #4685

> What is the difference between pytest.mark.xfail and pytest.xfail?

I think pytest.mark.xfail is the official way to decorate test functions, while pytest.xfail can be called in the function body to programmatically mark the test as an expected failure (which allows more control than the mark).
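The distinction could be illustrated as follows; the reason strings and the runtime condition are made up for the example:

```python
import pytest

# Decorator form: marks the whole test as an expected failure up front.
@pytest.mark.xfail(reason="units in indexes are not supported yet")
def test_index_units():
    raise AssertionError("known failure")

# Imperative form: pytest.xfail() is called inside the test body, so the
# decision can depend on runtime state (the flag here is hypothetical).
def test_conditional():
    environment_ready = False  # hypothetical runtime condition
    if not environment_ready:
        pytest.xfail("environment not prepared for this test")
    assert True
```

Note that pytest.xfail() raises immediately and unconditionally ends the test, whereas the marked test still runs to completion and is reported as XPASS if it unexpectedly passes.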

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: CI setup: use mamba and matplotlib-base (761270240)
id: 743488518
html_url: https://github.com/pydata/xarray/pull/4672#issuecomment-743488518
issue_url: https://api.github.com/repos/pydata/xarray/issues/4672
node_id: MDEyOklzc3VlQ29tbWVudDc0MzQ4ODUxOA==
user: keewis (14808389)
created_at: 2020-12-12T00:01:42Z
updated_at: 2020-12-12T00:01:42Z
author_association: MEMBER
body:

Pinning numba seems to have fixed the issue. It definitely is important to speed up our CI, though; waiting more than 30 minutes for it to finish is really not ideal.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: CI setup: use mamba and matplotlib-base (761270240)
id: 743462860
html_url: https://github.com/pydata/xarray/pull/4672#issuecomment-743462860
issue_url: https://api.github.com/repos/pydata/xarray/issues/4672
node_id: MDEyOklzc3VlQ29tbWVudDc0MzQ2Mjg2MA==
user: keewis (14808389)
created_at: 2020-12-11T22:35:55Z
updated_at: 2020-12-11T22:40:25Z
author_association: MEMBER
body:

I'm a bit confused: the Windows CI used to take about as long as the macOS CI to complete. The last run for which that was true was about a week ago; does anyone know what changed since then?

Edit: maybe it's because of the release of numba=0.52.0 on conda-forge? If so, could you try pinning numba?

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: CI setup: use mamba and matplotlib-base (761270240)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
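The schema above can be exercised with Python's built-in sqlite3 module. The following is a sketch against an in-memory copy of the table, seeded with one sample row from this page, that reproduces the row selection described at the top:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Same columns as the DDL above (foreign-key clauses omitted for brevity).
conn.execute("""
    CREATE TABLE issue_comments (
        html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY,
        node_id TEXT, user INTEGER, created_at TEXT, updated_at TEXT,
        author_association TEXT, body TEXT, reactions TEXT,
        performed_via_github_app TEXT, issue INTEGER
    )
""")
# One sample row taken from the records above.
conn.execute(
    "INSERT INTO issue_comments (id, user, author_association, issue, updated_at)"
    " VALUES (743488518, 14808389, 'MEMBER', 761270240, '2020-12-12T00:01:42Z')"
)
# The page's row selection, expressed as SQL.
rows = conn.execute(
    "SELECT id FROM issue_comments"
    " WHERE author_association = 'MEMBER' AND issue = 761270240 AND user = 14808389"
    " ORDER BY updated_at DESC"
).fetchall()
print(rows)  # the single sample comment id
```

The two CREATE INDEX statements above exist precisely to make filters like `issue = …` and `user = …` fast on the full table.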
Powered by Datasette · Queries took 121.304ms · About: xarray-datasette