issue_comments


2 rows where author_association = "MEMBER", issue = 703881154 and user = 14808389 sorted by updated_at descending



id: 694855220
html_url: https://github.com/pydata/xarray/pull/4432#issuecomment-694855220
issue_url: https://api.github.com/repos/pydata/xarray/issues/4432
node_id: MDEyOklzc3VlQ29tbWVudDY5NDg1NTIyMA==
user: keewis (14808389)
created_at: 2020-09-18T13:04:40Z
updated_at: 2020-09-18T13:20:45Z
author_association: MEMBER
body:

~~it did, the first failing commit is https://github.com/pydata/xarray/pull/4432/commits/381aaf8cc37502907506011f8cb9f4149e229d2d~~

Edit: sorry, you're right, the first commit should have also failed. Not sure why that happened, and we can't really check because the build logs were already deleted.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Fix optimize for chunked DataArray (703881154)
id: 694567822
html_url: https://github.com/pydata/xarray/pull/4432#issuecomment-694567822
issue_url: https://api.github.com/repos/pydata/xarray/issues/4432
node_id: MDEyOklzc3VlQ29tbWVudDY5NDU2NzgyMg==
user: keewis (14808389)
created_at: 2020-09-18T00:11:46Z
updated_at: 2020-09-18T00:11:46Z
author_association: MEMBER
body:

with the merge we have a test failure:

```
_____ test_persist_Dataset[<lambda>1] ______

persist = <function <lambda> at 0x7fe8d1646048>

    @pytest.mark.parametrize(
        "persist", [lambda x: x.persist(), lambda x: dask.persist(x)[0]]
    )
    def test_persist_Dataset(persist):
        ds = Dataset({"foo": ("x", range(5)), "bar": ("x", range(5))}).chunk()
        ds = ds + 1
        n = len(ds.foo.data.dask)

        ds2 = persist(ds)
>       assert len(ds2.foo.data.dask) == 1
E       AssertionError: assert 2 == 1
E        +  where 2 = len(<dask.highlevelgraph.HighLevelGraph object at 0x7fe8c45f4518>)
E        +    where <dask.highlevelgraph.HighLevelGraph object at 0x7fe8c45f4518> = dask.array<add, shape=(5,), dtype=int64, chunksize=(5,), chunktype=numpy.ndarray>.dask
E        +      where dask.array<add, shape=(5,), dtype=int64, chunksize=(5,), chunktype=numpy.ndarray> = <xarray.DataArray 'foo' (x: 5)>\ndask.array<add, shape=(5,), dtype=int64, chunksize=(5,), chunktype=numpy.ndarray>\nDimensions without coordinates: x.data
E        +        where <xarray.DataArray 'foo' (x: 5)>\ndask.array<add, shape=(5,), dtype=int64, chunksize=(5,), chunktype=numpy.ndarray>\nDimensions without coordinates: x = <xarray.Dataset>\nDimensions: (x: 5)\nDimensions without coordinates: x\nData variables:\n foo (x) int64 dask.array<chunksize=(5,), meta=np.ndarray>\n bar (x) int64 dask.array<chunksize=(5,), meta=np.ndarray>.foo

/home/vsts/work/1/s/xarray/tests/test_dask.py:943: AssertionError
```

does anyone know why that happens?
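For context on what that assertion counts: a dask collection's `.dask` attribute is a mapping from task keys to tasks, so `len(...)` is the number of tasks left in the graph. A toy sketch with plain dicts (the keys and task tuples here are made up for illustration, not dask's actual graph contents):

```python
# A dask graph is, at bottom, a mapping from task keys to tasks, so
# len(graph) counts tasks. The test expects persist to leave a single
# materialized task per variable; the failure shows two surviving.
# (Hypothetical toy dicts -- not real dask graph internals.)
graph_before_persist = {
    ("foo-chunk", 0): ("make_chunk",),          # the chunked source data
    ("add", 0): ("add_one", ("foo-chunk", 0)),  # the `ds + 1` layer
}
graph_after_persist = {
    ("add", 0): "materialized result",  # computation replaced by its value
}
print(len(graph_before_persist))  # the failing count: 2
print(len(graph_after_persist))   # the expected count: 1
```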

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Fix optimize for chunked DataArray (703881154)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
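The filtered view described at the top of the page ("2 rows where author_association = "MEMBER", issue = 703881154 and user = 14808389 sorted by updated_at descending") maps onto a plain SQL query against this schema. A minimal sketch using Python's stdlib sqlite3; the two inserted rows use the comment ids and timestamps shown above, with the remaining columns left NULL for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# the schema from the page, verbatim
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
""")
# the two comments shown on this page (other columns left NULL)
conn.executemany(
    "INSERT INTO issue_comments (id, [user], created_at, updated_at, author_association, issue)"
    " VALUES (?, ?, ?, ?, ?, ?)",
    [
        (694855220, 14808389, "2020-09-18T13:04:40Z", "2020-09-18T13:20:45Z", "MEMBER", 703881154),
        (694567822, 14808389, "2020-09-18T00:11:46Z", "2020-09-18T00:11:46Z", "MEMBER", 703881154),
    ],
)
# the filter and sort described at the top of the page
rows = conn.execute(
    """
    SELECT id, updated_at FROM issue_comments
    WHERE author_association = 'MEMBER' AND issue = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (703881154, 14808389),
).fetchall()
print(rows)  # -> [(694855220, '2020-09-18T13:20:45Z'), (694567822, '2020-09-18T00:11:46Z')]
```

ISO 8601 timestamps sort correctly as plain text, which is why the TEXT `updated_at` column can drive `ORDER BY ... DESC` directly.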
Powered by Datasette · Queries took 80.123ms · About: xarray-datasette