issue_comments


6 rows where issue = 207587161 (GroupBy like API for resample), sorted by updated_at descending


id: 280122805 · node_id: MDEyOklzc3VlQ29tbWVudDI4MDEyMjgwNQ==
user: shoyer (1217238) · author_association: MEMBER · created_at: 2017-02-15T20:04:07Z · updated_at: 2017-02-15T20:04:07Z
html_url: https://github.com/pydata/xarray/issues/1269#issuecomment-280122805 · issue_url: https://api.github.com/repos/pydata/xarray/issues/1269

I think this could be done with minimal GroupBy subclasses to supply the default dimension argument for aggregation functions. All the machinery on groupby should already be there.

On Wed, Feb 15, 2017 at 10:59 AM Daniel Rothenberg notifications@github.com wrote:

> @MaximilianR https://github.com/MaximilianR Oh, the interface is easy enough to do, even maintaining backwards-compatibility (already have that working). I was considering going the route taken with GroupBy https://github.com/pydata/xarray/blob/93d6963315026f87841c7cf39cc39bb78f555345/xarray/core/groupby.py#L165 and the classes that compose it, like DatasetGroupBy https://github.com/pydata/xarray/blob/93d6963315026f87841c7cf39cc39bb78f555345/xarray/core/groupby.py#L586... basically, we just record the desired resampling dimension and inject the grouping/resampling operations we want. This also adds the ability to specialize methods like .first() and .last(), as is done in the current implementation.
>
> But... if there's a simpler way, that might be preferable!


{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
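
The subclass/wrapper approach discussed here (shoyer's "minimal GroupBy subclasses" above, and darothen's plan to record the resampling dimension and specialize methods like .first()) could look roughly like the following. This is a hypothetical sketch, not xarray's actual implementation; the ResampleSketch name and the wrapper-instead-of-subclass structure are illustration choices.

```python
# Hypothetical sketch only (ResampleSketch is not a real xarray class): remember
# the resampling/grouping dimension and inject it as the default `dim` for
# aggregations, and specialize methods such as first().
import numpy as np
import pandas as pd
import xarray as xr


class ResampleSketch:
    """Wraps a GroupBy object and remembers the dimension being resampled."""

    def __init__(self, grouped, dim):
        self._grouped = grouped
        self._dim = dim

    def mean(self, dim=None, **kwargs):
        # Fall back to the remembered dimension when the caller does not pass one.
        return self._grouped.mean(self._dim if dim is None else dim, **kwargs)

    def first(self):
        # Methods like first()/last() can be specialized here as well.
        return self._grouped.first()


times = pd.date_range("2017-02-14", periods=96, freq="h")
da = xr.DataArray(np.random.rand(96, 3), coords={"time": times}, dims=("time", "x"))

wrapped = ResampleSketch(da.groupby("time.day"), dim="time")
daily = wrapped.mean()  # reduces along "time" only; the "x" dimension survives
```

Actual GroupBy subclasses, as shoyer suggests, would override the injected aggregation methods directly rather than wrapping, but the dimension bookkeeping is the same.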

id: 280104546 · node_id: MDEyOklzc3VlQ29tbWVudDI4MDEwNDU0Ng==
user: darothen (4992424) · author_association: NONE · created_at: 2017-02-15T18:59:17Z · updated_at: 2017-02-15T18:59:17Z
html_url: https://github.com/pydata/xarray/issues/1269#issuecomment-280104546 · issue_url: https://api.github.com/repos/pydata/xarray/issues/1269

@MaximilianR Oh, the interface is easy enough to do, even maintaining backwards-compatibility (already have that working). I was considering going the route taken with GroupBy and the classes that compose it, like DatasetGroupBy... basically, we just record the desired resampling dimension and inject the grouping/resampling operations we want. This also adds the ability to specialize methods like .first() and .last(), as is done in the current implementation.

But... if there's a simpler way, that might be preferable!

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

id: 280101839 · node_id: MDEyOklzc3VlQ29tbWVudDI4MDEwMTgzOQ==
user: max-sixty (5635139) · author_association: MEMBER · created_at: 2017-02-15T18:49:32Z · updated_at: 2017-02-15T18:49:32Z
html_url: https://github.com/pydata/xarray/issues/1269#issuecomment-280101839 · issue_url: https://api.github.com/repos/pydata/xarray/issues/1269

> the only sticking point I've come across so far is how to have the resulting Data{Array,set}GroupBy object "remember" the resampling dimension

I think an interface like ds.resample(time='24H').mean() would be much better. We could do that with a wrapper of pd.TimeGrouper that also had a dim field. Or inheritance 😨

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
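
The pd.TimeGrouper wrapper max-sixty mentions above could be sketched as follows. This is a hypothetical illustration: DimGrouper and parse_resample_indexer are made-up names, composition is used rather than the inheritance he jokes about, and pd.Grouper(freq=...) stands in for the 2017-era pd.TimeGrouper spelling.

```python
# Hypothetical sketch: pair a pandas time grouper with the dimension it applies
# to, so downstream groupby machinery knows which dim to reduce by default.
import pandas as pd


class DimGrouper:
    """A pandas time grouper plus the xarray dimension name it targets."""

    def __init__(self, dim, freq):
        self.dim = dim                        # the extra "dim" field
        self.grouper = pd.Grouper(freq=freq)  # pd.TimeGrouper in 2017-era pandas


def parse_resample_indexer(**indexer):
    # ds.resample(time='24H') would arrive here as {'time': '24H'};
    # this sketch assumes exactly one dim=frequency pair.
    (dim, freq), = indexer.items()
    return DimGrouper(dim, freq)


g = parse_resample_indexer(time="24h")
print(g.dim, g.grouper.freq)  # the dimension name and the parsed frequency offset
```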

id: 280101190 · node_id: MDEyOklzc3VlQ29tbWVudDI4MDEwMTE5MA==
user: max-sixty (5635139) · author_association: MEMBER · created_at: 2017-02-15T18:47:20Z · updated_at: 2017-02-15T18:47:20Z
html_url: https://github.com/pydata/xarray/issues/1269#issuecomment-280101190 · issue_url: https://api.github.com/repos/pydata/xarray/issues/1269

Would be great to test for these sorts of issues if we redo this: https://github.com/pydata/xarray/issues/1269

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

id: 279845588 · node_id: MDEyOklzc3VlQ29tbWVudDI3OTg0NTU4OA==
user: darothen (4992424) · author_association: NONE · created_at: 2017-02-14T21:44:11Z · updated_at: 2017-02-14T21:44:11Z
html_url: https://github.com/pydata/xarray/issues/1269#issuecomment-279845588 · issue_url: https://api.github.com/repos/pydata/xarray/issues/1269

Assuming we want to stick with pd.TimeGrouper under the hood, the only sticking point I've come across so far is how to have the resulting Data{Array,set}GroupBy object "remember" the resampling dimension, e.g. if you have multi-dimensional data and want to compute time means, you have to call

```python
ds.resample(time='24H').mean('time')
```

or else mean will operate across all dimensions. Any thoughts, @shoyer?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
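
A concrete, runnable version of the situation darothen describes, shown here with a plain datetime-component groupby instead of the 24H resample so the snippet stays version-agnostic: with multi-dimensional data the reduction has to be pointed at "time" explicitly, which is exactly the bookkeeping a resample object that "remembers" its dimension would handle for you.

```python
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2017-02-14", periods=96, freq="h")
ds = xr.Dataset(
    {"temp": (("time", "x"), np.random.rand(96, 4))},
    coords={"time": times},
)

# Explicitly naming the dimension keeps "x" intact; the dim-less form reducing
# over every dimension is the 2017-era behavior discussed in this thread.
daily = ds.groupby("time.day").mean("time")
print(daily.sizes)  # 4 daily groups along "day", with "x" preserved (size 4)
```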

id: 279810604 · node_id: MDEyOklzc3VlQ29tbWVudDI3OTgxMDYwNA==
user: darothen (4992424) · author_association: NONE · created_at: 2017-02-14T19:32:01Z · updated_at: 2017-02-14T19:32:01Z
html_url: https://github.com/pydata/xarray/issues/1269#issuecomment-279810604 · issue_url: https://api.github.com/repos/pydata/xarray/issues/1269

Let me dig into this a bit right now. My analysis project for this afternoon was already going to require digging into pandas' resampling in more depth anyway.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
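
The page's query (comments on issue 207587161, newest update first) can be reproduced against a local copy of this table. The sketch below assumes the schema above lives in a SQLite file; the filename "github.db" is a placeholder.

```python
import sqlite3

# "github.db" is a placeholder; point it at the SQLite file behind this
# Datasette instance.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 207587161
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, user_id, created, updated, association, body in rows:
    print(comment_id, updated, association)

conn.close()
```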