issue_comments

4 rows where issue = 67332234 sorted by updated_at descending

id: 91679470
html_url: https://github.com/pydata/xarray/issues/386#issuecomment-91679470
issue_url: https://api.github.com/repos/pydata/xarray/issues/386
node_id: MDEyOklzc3VlQ29tbWVudDkxNjc5NDcw
user: shoyer (1217238)
created_at: 2015-04-10T20:37:43Z
updated_at: 2015-04-10T20:37:43Z
author_association: MEMBER
body:

Resampling actually supports custom aggregation methods with the how parameter: http://xray.readthedocs.org/en/stable/generated/xray.Dataset.resample.html#xray.Dataset.resample

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: "loosing" virtual variables (67332234)
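The how parameter mentioned above belongs to the pre-0.8 xray API and no longer exists in modern xarray; the same custom-aggregation pattern survives in pandas, whose resample xarray's was modeled on. A minimal sketch using pandas (assumed installed); the toy series and the peak-to-peak aggregator are illustrative, not from the thread:

```python
import pandas as pd

# Two readings per day over two days.
idx = pd.date_range("2015-04-09", periods=4, freq="12h")
ts = pd.Series([1.0, 3.0, 2.0, 6.0], index=idx)

# Custom aggregation per resampled bin: daily peak-to-peak range.
daily_range = ts.resample("1D").apply(lambda x: x.max() - x.min())
```

In the old xray API discussed here, the equivalent was spelled roughly ts.resample('1D', dim='time', how=func), per the how parameter and the call shown later in the thread.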
id: 91495357
html_url: https://github.com/pydata/xarray/issues/386#issuecomment-91495357
issue_url: https://api.github.com/repos/pydata/xarray/issues/386
node_id: MDEyOklzc3VlQ29tbWVudDkxNDk1MzU3
user: mathause (10194086)
created_at: 2015-04-10T09:40:36Z
updated_at: 2015-04-10T09:40:36Z
author_association: MEMBER
body:

Thanks, it works. I didn't think of this.

However, for anything other than the mean this would not work? So I would opt for the second option.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: "loosing" virtual variables (67332234)
id: 91355259
html_url: https://github.com/pydata/xarray/issues/386#issuecomment-91355259
issue_url: https://api.github.com/repos/pydata/xarray/issues/386
node_id: MDEyOklzc3VlQ29tbWVudDkxMzU1MjU5
user: shoyer (1217238)
created_at: 2015-04-09T21:01:04Z
updated_at: 2015-04-09T21:01:04Z
author_association: MEMBER
body:

The better solution here is to use resample rather than groupby('time.date'):

ts_mean = ts.resample('1D', dim='time')

time.date and time.time (which is currently broken) are weird virtual variables because, as you point out, they create object arrays. That makes them not so useful. It might be better to get rid of them entirely, or to replace them with some sort of shortcut that returns datetime64 instead.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: "loosing" virtual variables (67332234)
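Shoyer's point above — resample keeps the time coordinate as datetime64, while grouping on extracted date objects degrades it to an object array — can be sketched with pandas (assumed installed; the toy series is illustrative):

```python
import pandas as pd

idx = pd.date_range("2015-04-09", periods=4, freq="12h")
ts = pd.Series([1.0, 3.0, 2.0, 6.0], index=idx)

# resample keeps a proper datetime64 index...
resampled = ts.resample("1D").mean()

# ...while grouping on datetime.date objects yields an object-dtype index.
grouped = ts.groupby(ts.index.date).mean()
```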
id: 91219325
html_url: https://github.com/pydata/xarray/issues/386#issuecomment-91219325
issue_url: https://api.github.com/repos/pydata/xarray/issues/386
node_id: MDEyOklzc3VlQ29tbWVudDkxMjE5MzI1
user: mathause (10194086)
created_at: 2015-04-09T12:44:16Z
updated_at: 2015-04-09T12:44:16Z
author_association: MEMBER
body:

OK, the problem seems to be that the datetime64 array is converted to an object array.

print(ts)
print(ts_mean)

Could you keep it as a datetime64 object? Or would that be a pandas problem?

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: "loosing" virtual variables (67332234)
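The conversion mathause observes happens because Python date objects have no native NumPy dtype; truncating to datetime64[D] instead, along the lines of shoyer's "shortcut that returns datetime64" suggestion, keeps the array in a numeric dtype. A sketch with NumPy (assumed available; the sample timestamps are illustrative):

```python
import datetime
import numpy as np

times = np.array(
    ["2015-04-09T06:00", "2015-04-09T18:00", "2015-04-10T06:00"],
    dtype="datetime64[s]",
)

# Truncating to day precision stays within datetime64.
days_dt64 = times.astype("datetime64[D]")

# Round-tripping through Python objects yields datetime.date values
# in an object-dtype array -- the degradation described above.
days_obj = days_dt64.astype(object)
```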

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 14.947ms · About: xarray-datasette