
issue_comments


3 rows where issue = 84127296 and user = 2448579 sorted by updated_at descending

485456780 · dcherian (2448579) · MEMBER · 2019-04-22T15:52:15Z
https://github.com/pydata/xarray/issues/422#issuecomment-485456780
issue: add average function (84127296)

> With regard to the implementation, I thought of orienting myself along the lines of groupby, rolling or resample. Or are there any concerns for this specific method?

I would do the same, i.e. take inspiration from the groupby / rolling / resample modules.

Reactions: none.
483715005 · dcherian (2448579) · MEMBER · 2019-04-16T15:37:37Z
https://github.com/pydata/xarray/issues/422#issuecomment-483715005
issue: add average function (84127296)

@pgierz take a look at the "good first issue" label: https://github.com/pydata/xarray/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22

Reactions: none.
482737161 · dcherian (2448579) · MEMBER · 2019-04-12T22:03:27Z
https://github.com/pydata/xarray/issues/422#issuecomment-482737161
issue: add average function (84127296)

> I think we should maybe build in a warning for when the weights array does not contain both of the average dimensions?

Hmm, the intent here would be that the weights are broadcast against the input array, no? Not sure that a warning is required. E.g. @shoyer's comment above:

> I would suggest not using keyword arguments for weighted. Instead, just align based on the labels of the argument like regular xarray operations. So we'd write da.weighted(days_per_month(da.time)).mean()

Are we going to require that the argument to weighted is a DataArray that shares at least one dimension with da?

Reactions: none.
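The broadcasting behaviour discussed in this comment — weights that need not carry every dimension of the data, and are instead broadcast against it — can be sketched with plain NumPy (a minimal illustration of the idea, not xarray's actual implementation; the array shapes and the days-per-month values here are invented for the example):

```python
import numpy as np

def weighted_mean(data, weights, axis=None):
    """Weighted mean where `weights` is broadcast against `data`.

    Sketches the idea from the discussion: the weights array need not
    contain every dimension of the data; broadcasting fills in the rest.
    """
    w = np.broadcast_to(weights, data.shape)
    return (data * w).sum(axis=axis) / w.sum(axis=axis)

# data with shape (time=3, space=2); weights only along time,
# e.g. days per month, broadcast over the space dimension
data = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
days = np.array([31.0, 28.0, 31.0]).reshape(3, 1)

print(weighted_mean(data, days, axis=0))  # → [3. 4.]
```

The shared-dimension question in the last line of the comment is exactly what positional broadcasting cannot answer; xarray's label-based alignment (as in the quoted suggestion) sidesteps it by matching on dimension names rather than on axis order.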


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
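The schema above is enough to reproduce this page's query ("3 rows where issue = 84127296 and user = 2448579 sorted by updated_at descending") locally. A minimal sketch using Python's built-in sqlite3 module — only the columns the query touches are populated, with the ids and timestamps taken from the rows above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Same schema as above; SQLite accepts the REFERENCES clauses even though
# the users/issues tables are absent (foreign keys are off by default).
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
   [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# The three comment rows shown on this page (bodies omitted).
rows = [
    (485456780, 2448579, 84127296, "2019-04-22T15:52:15Z"),
    (483715005, 2448579, 84127296, "2019-04-16T15:37:37Z"),
    (482737161, 2448579, 84127296, "2019-04-12T22:03:27Z"),
]
conn.executemany(
    "INSERT INTO issue_comments (id, user, issue, updated_at) VALUES (?, ?, ?, ?)",
    rows,
)

ids = [r[0] for r in conn.execute(
    "SELECT id FROM issue_comments WHERE issue = ? AND user = ? "
    "ORDER BY updated_at DESC",
    (84127296, 2448579),
)]
print(ids)  # → [485456780, 483715005, 482737161]
```

Both indexes defined above are usable here: SQLite can satisfy the WHERE clause from either idx_issue_comments_issue or idx_issue_comments_user, then sorts the three matching rows by updated_at.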
Powered by Datasette · Queries took 553.083ms · About: xarray-datasette