
issue_comments


3 rows where author_association = "CONTRIBUTOR" and user = 2552981, sorted by updated_at descending

id: 520210672
html_url: https://github.com/pydata/xarray/pull/3054#issuecomment-520210672
issue_url: https://api.github.com/repos/pydata/xarray/issues/3054
node_id: MDEyOklzc3VlQ29tbWVudDUyMDIxMDY3Mg==
user: coroa (2552981)
created_at: 2019-08-11T08:36:19Z
updated_at: 2019-08-11T08:36:41Z
author_association: CONTRIBUTOR
body:

@yohai: In short, no. It does not make sense to add a built-in function for iteration if it is unable to augment the low-level functionality.

I'd recommend closing this PR!

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Flat iteration over DataArray (462049420)

id: 509145346
html_url: https://github.com/pydata/xarray/pull/3079#issuecomment-509145346
issue_url: https://api.github.com/repos/pydata/xarray/issues/3079
node_id: MDEyOklzc3VlQ29tbWVudDUwOTE0NTM0Ng==
user: coroa (2552981)
created_at: 2019-07-08T09:10:54Z
updated_at: 2019-07-08T09:10:54Z
author_association: CONTRIBUTOR
body:

OK, rebased on master and added tests. It was not so easy to make the linter and pytest happy :), but we're there. I hope skipping E501 ("line too long (... > 79 characters)") for the multiline test string is allowed.

https://github.com/pydata/xarray/blob/e4716a4c2b9df681eb9e43dd31a3c9ec6e104e44/xarray/tests/test_dataarray.py#L74-L81

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Fix printing summaries of multiindex coords (464152437)
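
As an aside to the comment above, a minimal illustration of the E501 workaround it mentions, assuming flake8-style linting; the string content is invented for the example and is not the actual test from the PR:

# Hypothetical example: a physical source line longer than 79 characters can be
# exempted from flake8's E501 ("line too long") check with a trailing noqa comment.
expected = "  * x        (x) MultiIndex  -- deliberately long illustrative line exceeding seventy-nine characters"  # noqa: E501
print(len(expected) > 79)  # the noqa marker applies only to that physical source line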
id: 508407631
html_url: https://github.com/pydata/xarray/pull/3054#issuecomment-508407631
issue_url: https://api.github.com/repos/pydata/xarray/issues/3054
node_id: MDEyOklzc3VlQ29tbWVudDUwODQwNzYzMQ==
user: coroa (2552981)
created_at: 2019-07-04T09:15:14Z
updated_at: 2019-07-04T09:15:14Z
author_association: CONTRIBUTOR
body:

@yohai It's a lot more efficient to simply iterate over the underlying array, i.e. da.values.flat, if you can afford to hold everything in memory.

If you are instead using streaming computation based on dask, then you would have to do something similar on a per-chunk basis.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Flat iteration over DataArray (462049420)
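
A minimal sketch of the approach described in the comment above; the DataArray is invented for the example, and the chunked part assumes dask is installed:

import numpy as np
import xarray as xr

# Toy DataArray standing in for "da" from the comment above.
da = xr.DataArray(np.arange(6).reshape(2, 3), dims=("x", "y"))

# In-memory case: da.values is the underlying NumPy array and .flat walks
# every element without constructing an xarray object per element.
total = 0
for value in da.values.flat:
    total += value

# Dask-backed (streaming) case: apply the same idea one chunk at a time.
# .to_delayed() yields one delayed NumPy block per chunk of the dask array.
da_chunked = da.chunk({"x": 1})
total_chunked = 0
for delayed_block in da_chunked.data.to_delayed().flatten():
    block = delayed_block.compute()  # materialize a single chunk
    for value in block.flat:
        total_chunked += value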

Table schema:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
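
Given this schema, the filter at the top of the page (author_association = "CONTRIBUTOR", user = 2552981, ordered by updated_at descending) corresponds to a query along these lines; this is a sketch, and the local database file name github.db is an assumption rather than something stated on the page:

import sqlite3

# Run the page's filter against a local copy of the database
# ("github.db" is an assumed file name, not taken from the page).
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, html_url, created_at, updated_at, author_association
    FROM issue_comments
    WHERE author_association = 'CONTRIBUTOR' AND [user] = 2552981
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, html_url, created_at, updated_at, association in rows:
    print(comment_id, updated_at, html_url)
conn.close()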