issue_comments
10 rows where author_association = "MEMBER" and issue = 60303760 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
347792648 | https://github.com/pydata/xarray/issues/364#issuecomment-347792648 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDM0Nzc5MjY0OA== | shoyer 1217238 | 2017-11-29T08:51:19Z | 2017-11-29T08:51:19Z | MEMBER | Well, the functionality is still there, it's just recommended that you use pd.Grouper. On Wed, Nov 29, 2017 at 2:47 AM lexual notifications@github.com wrote:
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
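As a quick illustration of that recommendation (a minimal sketch of the pandas side, not taken from the thread): pd.Grouper(freq=...) covers the frequency-based grouping that the deprecated pd.TimeGrouper used to provide.

```
import numpy as np
import pandas as pd

# Minimal sketch: pd.Grouper(freq=...) is the recommended replacement for the
# deprecated pd.TimeGrouper when grouping a datetime-indexed frame by frequency.
df = pd.DataFrame(
    {"data": np.arange(72)},
    index=pd.date_range("2000-01-01", periods=72, freq="H"),
)
daily_mean = df.groupby(pd.Grouper(freq="1D")).mean()
print(daily_mean)
```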
341614052 | https://github.com/pydata/xarray/issues/364#issuecomment-341614052 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDM0MTYxNDA1Mg== | shoyer 1217238 | 2017-11-03T03:13:14Z | 2017-11-03T03:13:14Z | MEMBER | Have you tried iterating over a resample object in the v0.10 release candidate? I believe the new resample API supports iteration. On Thu, Nov 2, 2017 at 5:40 PM hazbottles notifications@github.com wrote:
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
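A minimal sketch of what that iteration looks like, assuming the v0.10-style ds.resample(time=freq) API and that iteration is supported as suggested above (illustrative only, not from the thread):

```
import numpy as np
import pandas as pd
import xarray as xr

# Sketch: the v0.10 resample API returns a groupby-like object; iterating it
# should yield (bin label, sub-dataset) pairs, one per resample bin.
ds = xr.Dataset(
    {"data": ("time", np.arange(48))},
    coords={"time": pd.date_range("2000-01-01", periods=48, freq="H")},
)
for label, group in ds.resample(time="1D"):
    print(label, len(group["time"]))
```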
230935548 | https://github.com/pydata/xarray/issues/364#issuecomment-230935548 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDIzMDkzNTU0OA== | shoyer 1217238 | 2016-07-06T23:15:27Z | 2016-07-06T23:15:56Z | MEMBER | @saulomeirelles Nope, this hasn't been added yet, beyond what you can do with the current |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
78214797 | https://github.com/pydata/xarray/issues/364#issuecomment-78214797 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc4MjE0Nzk3 | shoyer 1217238 | 2015-03-11T07:06:57Z | 2015-03-11T07:06:57Z | MEMBER | The problem is that you've created a new
Also, unlike pandas, xray currently does the core loop for all groupby operations in pure Python, which means that yes, it will be slow when you have a very large number of groups (and it loops again to handle your 15 different variables). Using something like Cython or Numba to speed up groupby operations is on my to-do list, but I've found this to be less of a barrier than you might expect for multi-dimensional datasets -- individual group members tend to include more elements than in DataFrames. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
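To make the "core loop in pure Python" point concrete, here is a rough sketch (my own illustration, written against the current package name xarray) of what a groupby reduction amounts to: one sub-dataset per group, each reduced in a Python-level loop, so cost grows with the number of groups and variables.

```
import numpy as np
import pandas as pd
import xarray as xr

# Rough sketch of what a groupby reduction does conceptually: split the dataset
# into one group per label, then reduce each group in a Python-level loop.
ds = xr.Dataset(
    {"data": ("time", np.random.rand(1000))},
    coords={"time": pd.date_range("2000-01-01", periods=1000, freq="H")},
)
per_group = [group.mean("time") for _, group in ds.groupby("time.dayofyear")]
result = ds.groupby("time.dayofyear").mean("time")  # the built-in equivalent of the loop
```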
78192774 | https://github.com/pydata/xarray/issues/364#issuecomment-78192774 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc4MTkyNzc0 | shoyer 1217238 | 2015-03-11T03:08:31Z | 2015-03-11T03:08:31Z | MEMBER | I don't think the timeofday issue is related to using Timedeltas in the index (and it's certainly not related to the
Here's an example that seems to be working properly (except for uselessly displaying timedeltas in nanoseconds):
```
In [29]: time = pd.date_range('2000-01-01', freq='H', periods=100)

In [30]: daystart = time.to_period(freq='1D').to_datetime()

In [31]: timeofday = time.values - daystart.values

In [32]: ds = xray.Dataset({'data': ('time', range(100))}, {'time': time, 'timeofday': ('time', timeofday)})

In [33]: ds
Out[33]:
<xray.Dataset>
Dimensions:    (time: 100)
Coordinates:
    timeofday  (time) timedelta64[ns] 0 nanoseconds ...
  * time       (time) datetime64[ns] 2000-01-01 2000-01-01T01:00:00 ...
Data variables:
    data       (time) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 ...

In [34]: ds.groupby('timeofday').mean('time')
Out[34]:
<xray.Dataset>
Dimensions:    (timeofday: 24)
Coordinates:
  * timeofday  (timeofday) timedelta64[ns] 0 nanoseconds ...
Data variables:
    data       (timeofday) float64 48.0 49.0 50.0 51.0 40.0 41.0 42.0 43.0 44.0 45.0 46.0 ...
```
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
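When hour-of-day resolution is enough, grouping by the 'time.hour' virtual coordinate is a shorter route than building a timedelta coordinate; a minimal sketch (my own addition, not part of the comment above):

```
import numpy as np
import pandas as pd
import xarray as xr

# Sketch: group by the 'time.hour' datetime component to get hour-of-day means
# directly, without constructing a separate timeofday variable.
ds = xr.Dataset(
    {"data": ("time", np.arange(100))},
    coords={"time": pd.date_range("2000-01-01", periods=100, freq="H")},
)
hour_of_day_mean = ds.groupby("time.hour").mean("time")
```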
77984506 | https://github.com/pydata/xarray/issues/364#issuecomment-77984506 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc3OTg0NTA2 | shoyer 1217238 | 2015-03-10T02:21:37Z | 2015-03-10T02:21:37Z | MEMBER | Hmm. However, it should work in pandas -- you can do
```
In [13]: t = pd.date_range('2000-01-01', periods=10000, freq='H')

In [14]: t.time
Out[14]:
array([datetime.time(0, 0), datetime.time(1, 0), datetime.time(2, 0), ...,
       datetime.time(13, 0), datetime.time(14, 0), datetime.time(15, 0)], dtype=object)
```
The simplest way to do timeofday, though, is probably just to calculate
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
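Continuing the pandas example, the object-dtype array returned by t.time can be passed straight to groupby; a minimal sketch (my own illustration):

```
import numpy as np
import pandas as pd

# Sketch: group a Series by time of day using DatetimeIndex.time, which
# returns an object array of datetime.time values aligned with the index.
t = pd.date_range("2000-01-01", periods=10000, freq="H")
s = pd.Series(np.arange(10000), index=t)
by_timeofday = s.groupby(t.time).mean()
```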
77898592 | https://github.com/pydata/xarray/issues/364#issuecomment-77898592 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc3ODk4NTky | shoyer 1217238 | 2015-03-09T17:16:14Z | 2015-03-09T17:16:14Z | MEMBER | For pandas resample, see here: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#up-and-downsampling The doc string could definitely use an update there, too -- see https://github.com/pydata/pandas/issues/5023 (I think I'll try to update this, too) For I'm going to consolidate all the time/date functionality into a new documentation page for the next release of xray, since this is kind of all over the place now. Also, I should probably break up that monolithic page on "Data structures", perhaps into "Basics" and "Advanced" pages. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
77818399 | https://github.com/pydata/xarray/issues/364#issuecomment-77818399 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc3ODE4Mzk5 | shoyer 1217238 | 2015-03-09T08:54:34Z | 2015-03-09T08:54:34Z | MEMBER | Indeed, I need to complete the For your other use case, you just want to group by |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
77808372 | https://github.com/pydata/xarray/issues/364#issuecomment-77808372 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc3ODA4Mzcy | shoyer 1217238 | 2015-03-09T07:01:11Z | 2015-03-09T07:01:11Z | MEMBER | Well, I guess the first question is -- are there uses for TimeGrouper that you can't easily do with resample? I suppose the simplest (no new method) would be to allow passing a dict where the key is the time dimension and the value is the grouper. Something like |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
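One pandas-side answer to "are there uses for TimeGrouper that you can't easily do with resample?" is combining a time grouper with an ordinary column key; a minimal sketch (my own illustration, not from the thread):

```
import numpy as np
import pandas as pd

# Sketch: pd.Grouper can be combined with a regular column in one groupby,
# something a plain resample call does not express.
df = pd.DataFrame(
    {"station": np.tile(["a", "b"], 48), "data": np.arange(96.0)},
    index=pd.date_range("2000-01-01", periods=96, freq="H"),
)
daily_by_station = df.groupby([pd.Grouper(freq="1D"), "station"]).mean()
```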
77806318 | https://github.com/pydata/xarray/issues/364#issuecomment-77806318 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc3ODA2MzE4 | shoyer 1217238 | 2015-03-09T06:31:14Z | 2015-03-09T06:31:28Z | MEMBER | I wrote a resample function last week based on TimeGrouper. See the dev docs for more details: http://xray.readthedocs.org/en/latest/whats-new.html This should go out in the 0.4.1 release, which I'd like to get out later this week (everyone likes faster release cycles if they are backwards compatible). It would be pretty straightforward to create some sort of API that gives direct access to the resulting GroupBy object. I was considering something like |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 |
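For reference, a sketch of the call this resample function exposed, assuming the pre-v0.10 signature resample(freq, dim, how) described in the linked whats-new page (the old form no longer runs on current xarray, so it is shown as a comment):

```
import numpy as np
import pandas as pd
import xarray as xr

ds = xr.Dataset(
    {"data": ("time", np.arange(100.0))},
    coords={"time": pd.date_range("2000-01-01", periods=100, freq="H")},
)
# Old (0.4.1-era) TimeGrouper-backed call, assuming the signature above:
#     daily = ds.resample('1D', dim='time', how='mean')
# Current equivalent:
daily = ds.resample(time="1D").mean()
```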
```
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```