issue_comments

3 rows where issue = 384004189 and user = 10050469 sorted by updated_at descending
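
The equivalent SQL for this filtered view (a sketch; the exact query Datasette generates may differ):

SELECT *
  FROM [issue_comments]
 WHERE [issue] = 384004189
   AND [user] = 10050469
 ORDER BY [updated_at] DESC;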

id: 447687361
html_url: https://github.com/pydata/xarray/pull/2571#issuecomment-447687361
issue_url: https://api.github.com/repos/pydata/xarray/issues/2571
node_id: MDEyOklzc3VlQ29tbWVudDQ0NzY4NzM2MQ==
user: fmaussion (10050469)
created_at: 2018-12-16T23:40:32Z
updated_at: 2018-12-16T23:40:32Z
author_association: MEMBER
issue: CF: also decode time bounds when available (384004189)
body:

  > +1 for putting this in "Breaking changes"

  Sorry for being so slow. I've just done it and will merge tomorrow once the tests pass.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}

id: 443420763
html_url: https://github.com/pydata/xarray/pull/2571#issuecomment-443420763
issue_url: https://api.github.com/repos/pydata/xarray/issues/2571
node_id: MDEyOklzc3VlQ29tbWVudDQ0MzQyMDc2Mw==
user: fmaussion (10050469)
created_at: 2018-12-01T11:51:50Z
updated_at: 2018-12-01T11:51:50Z
author_association: MEMBER
issue: CF: also decode time bounds when available (384004189)
body:

  Thanks! Made the change as requested. Regarding the general design:

  > With this implementation, if one decodes the times and then saves things back out to a file, a "time bounds" variable will contain units and calendar attributes even though it might not have had them initially. I'm not sure if this is a big deal (in this case it simply adds redundant metadata), but I wanted to bring it up in case it is a concern.

  I agree - I don't think it is a big deal either. It must also be noted that this might break some code in downstream libraries (including mine) which has built workarounds for this and will now error because the bounds are already decoded. Here too, I'm quite confident that this is an edge case, but it's worth mentioning. Should I put my what's new entry under "Breaking changes" instead?

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}

id: 443026008
html_url: https://github.com/pydata/xarray/pull/2571#issuecomment-443026008
issue_url: https://api.github.com/repos/pydata/xarray/issues/2571
node_id: MDEyOklzc3VlQ29tbWVudDQ0MzAyNjAwOA==
user: fmaussion (10050469)
created_at: 2018-11-29T23:00:10Z
updated_at: 2018-12-01T11:45:09Z
author_association: MEMBER
issue: CF: also decode time bounds when available (384004189)
body:

  I addressed all comments except the fixtures, which seemed a bit overkill (but I simplified the tests). Thanks!

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}

Table schema:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
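
To resolve the user foreign key to a readable name, the comments can be joined against the referenced users table; a minimal sketch, assuming that table exposes a login column (not shown in this schema):

SELECT [issue_comments].[id],
       [users].[login],
       [issue_comments].[created_at],
       [issue_comments].[body]
  FROM [issue_comments]
  JOIN [users] ON [users].[id] = [issue_comments].[user]
 WHERE [issue_comments].[issue] = 384004189
 ORDER BY [issue_comments].[updated_at] DESC;
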
Powered by Datasette · About: xarray-datasette