issue_comments


5 rows where issue = 905848466 sorted by updated_at descending


Issue: Add type checking when concat (905848466)
851721485 · keewis (MEMBER) · 2021-05-31T23:58:54Z
https://github.com/pydata/xarray/pull/5397#issuecomment-851721485

> keeping the full list in memory isn't expensive at all IIUC

I agree, this should not be an issue at all: we're iterating over datasets multiple times so we definitely need it to be a sequence (currently it's some kind of iterator object, e.g. the one returned by dict.keys).
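A quick plain-Python illustration of the point (a generic sketch, not xarray code): a generator-style iterator can be consumed only once, while materializing it with list() stores only references to the datasets, so re-iterating becomes safe and the memory cost stays small.

```python
def incoming():
    # stand-in for the iterator-like object passed to concat
    yield from ["ds1", "ds2", "ds3"]

gen = incoming()
first_pass = list(gen)   # consumes the generator
second_pass = list(gen)  # already exhausted: yields nothing

# materialize once up front instead
datasets = list(incoming())
again = list(datasets)   # a list can be re-iterated freely

print(first_pass, second_pass, again)
```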

Reactions: none
851624047 · max-sixty (MEMBER) · 2021-05-31T18:15:20Z
https://github.com/pydata/xarray/pull/5397#issuecomment-851624047

> Will it not cripple performance due to the non-lazy evaluation of the list?

I don't think so, but I'm not sure — keeping the full list in memory isn't expensive at all IIUC. And either way, it'll confirm the reason for the test failures for now.

> I also started the tests for the concat function; I discovered that the type of the first element determines whether _dataarray_concat or _dataset_concat is called. I am not sure if I need to do several commits (type checking Dataset, type checking DataArray) and several tests.
>
> Here is the draft for the tests:

Those look good! Feel free to add them directly and we can discuss them there, and see the test results. Thanks!

Reactions: none
851618811 · thomashirtz (CONTRIBUTOR) · 2021-05-31T17:59:46Z
https://github.com/pydata/xarray/pull/5397#issuecomment-851618811

Will it not cripple performance due to the non-lazy evaluation of the list?

I also started the tests for the concat function; I discovered that the type of the first element determines whether _dataarray_concat or _dataset_concat is called. I am not sure if I need to do several commits (type checking Dataset, type checking DataArray) and several tests.

Here is the draft for the tests:

```python
def test_concat_check_input_type():
    ds = Dataset({"foo": 1}, {"bar": 2})
    da = Dataset({"foo": 3}, {"bar": 4}).to_array(dim="foo")

    # concatenating a list of non-homogeneous types must raise TypeError
    with pytest.raises(TypeError, match="Some elements in the input list datasets are not 'DataSet'"):
        concat([ds, da], dim="foo")

    with pytest.raises(TypeError, match="Some elements in the input list datasets are not 'DataArray'"):
        concat([da, ds], dim="foo")
```

Code I also plan to add for the typing check in _dataarray_concat (I need to add from .dataarray import DataArray to be able to use the type):

```python
from .dataarray import DataArray

arrays = list(arrays)
if not all(isinstance(array, DataArray) for array in arrays):
    raise TypeError("Some elements in the input list datasets are not 'DataArray'")
```
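The dispatch described above can be sketched end-to-end as follows (a hypothetical, simplified stand-in with dummy classes; xarray's real concat does much more):

```python
class Dataset:
    """Dummy stand-in for xarray.Dataset."""

class DataArray:
    """Dummy stand-in for xarray.DataArray."""

def concat(objs, dim):
    objs = list(objs)  # materialize so generators can be checked and re-iterated
    first = objs[0]
    # the type of the first element selects the concat path
    if isinstance(first, DataArray):
        if not all(isinstance(o, DataArray) for o in objs):
            raise TypeError("Some elements in the input list datasets are not 'DataArray'")
        return "_dataarray_concat"
    if isinstance(first, Dataset):
        if not all(isinstance(o, Dataset) for o in objs):
            raise TypeError("Some elements in the input list datasets are not 'DataSet'")
        return "_dataset_concat"
    raise TypeError("can only concatenate Dataset and DataArray objects")
```

With this shape, a mixed list raises whichever error message matches the path chosen by the first element, which is exactly what the two draft tests assert.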

Reactions: none
850666780 · thomashirtz (CONTRIBUTOR) · 2021-05-28T20:58:53Z
https://github.com/pydata/xarray/pull/5397#issuecomment-850666780

Ok! I'll do that in the next few days :) I'm trying to get more familiar with the whole procedure.

Reactions: none
850662147 · max-sixty (MEMBER) · 2021-05-28T20:47:48Z
https://github.com/pydata/xarray/pull/5397#issuecomment-850662147

Hi @thomashirtz ! Thanks for the PR.

The code looks reasonable, though it's generating test failures. To prioritize quick feedback over complete feedback: possibly datasets is sometimes a generator. We could add datasets = list(datasets) above the current code. If that doesn't fix it, feel free to pick one of the errors and try debugging what's going on, or someone can help.

We should add a test for this too.

Reactions: +1 × 1

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 14.706ms · About: xarray-datasette