issue_comments
9 rows where issue = 60303760 and user = 167164 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
78239807 | https://github.com/pydata/xarray/issues/364#issuecomment-78239807 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc4MjM5ODA3 | naught101 167164 | 2015-03-11T10:38:05Z | 2015-03-11T10:38:05Z | NONE | Ah, yep, making the dimension using |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
78211171 | https://github.com/pydata/xarray/issues/364#issuecomment-78211171 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc4MjExMTcx | naught101 167164 | 2015-03-11T06:17:10Z | 2015-03-11T06:17:10Z | NONE | Ok, weird. That example works for me, but even if I take a really short slice of my data set, the same thing won't work:

```
In [61]: d = data.sel(time=slice('2002-01-01','2002-01-03'))
         d
Out[61]:
<xray.Dataset>
Dimensions:           (time: 143, timeofday: 70128, x: 1, y: 1, z: 1)
Coordinates:
  * x                 (x) >f8 1.0
  * y                 (y) >f8 1.0
  * z                 (z) >f8 1.0
  * time              (time) datetime64[ns] 2002-01-01T00:30:00 ...
  * timeofday         (timeofday) timedelta64[ns] 1800000000000 nanoseconds ...
Data variables:
    SWdown            (time, y, x) float64 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 14.58 ...
    Rainf_qc          (time, y, x) float64 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 ...
    SWdown_qc         (time, y, x) float64 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 ...
    Tair              (time, z, y, x) float64 282.9 282.9 282.7 282.6 282.4 281.7 281.0 ...
    Tair_qc           (time, y, x) float64 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 ...
    LWdown            (time, y, x) float64 296.7 297.3 297.3 297.3 297.2 295.9 294.5 ...
    PSurf_qc          (time, y, x) float64 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ...
    latitude          (y, x) float64 -35.66
    Wind              (time, z, y, x) float64 2.2 2.188 1.9 2.2 2.5 2.5 2.5 2.25 2.0 2.35 ...
    LWdown_qc         (time, y, x) float64 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ...
    Rainf             (time, y, x) float64 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ...
    Qair_qc           (time, y, x) float64 1.0 0.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 ...
    longitude         (y, x) float64 148.2
    PSurf             (time, y, x) float64 8.783e+04 8.783e+04 8.782e+04 8.781e+04 ...
    reference_height  (y, x) float64 70.0
    elevation         (y, x) float64 1.2e+03
    Qair              (time, z, y, x) float64 0.00448 0.004608 0.004692 0.004781 ...
    Wind_qc           (time, y, x) float64 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 ...
Attributes:
    Production_time: 2012-09-27 12:44:42
    Production_source: PALS automated netcdf conversion
    Contact: palshelp@gmail.com
    PALS_fluxtower_template_version: 1.0.2
    PALS_dataset_name: TumbaFluxnet
    PALS_dataset_version: 1.4

In [62]: d.groupby('timeofday').mean('time')
```

That last command will not complete - it will run for minutes. Not really sure how to debug that behaviour. Perhaps it's to do with the long/lat/height variables that really should be coordinates (I'm just using the data as it came, but I can clean that, if necessary) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
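An aside on the hang above: the repr shows `timeofday` carrying 70128 distinct timedelta values (apparently one per half-hourly timestamp of the full record), so `d.groupby('timeofday')` has to build tens of thousands of groups, which plausibly explains why the reduction never seems to finish. A minimal sketch, assuming synthetic data and the modern `xarray` package (not the 2015 `xray` release discussed in the thread), of grouping by a coarse time-of-day key instead:

```
# Sketch only: synthetic half-hourly data, modern xarray API.
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2002-01-01 00:30", periods=143, freq="30min")
ds = xr.Dataset({"SWdown": ("time", np.random.rand(143))},  # stand-in variable
                coords={"time": times})

# Grouping by the virtual "time.hour" component gives at most 24 groups,
# so the reduction stays cheap regardless of how long the record is.
hourly_cycle = ds.groupby("time.hour").mean()
print(hourly_cycle)
```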
78191526 | https://github.com/pydata/xarray/issues/364#issuecomment-78191526 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc4MTkxNTI2 | naught101 167164 | 2015-03-11T03:00:03Z | 2015-03-11T03:00:03Z | NONE | same problem with |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
78036587 | https://github.com/pydata/xarray/issues/364#issuecomment-78036587 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc4MDM2NTg3 | naught101 167164 | 2015-03-10T11:30:10Z | 2015-03-10T11:30:10Z | NONE | Dunno if this is related to the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
78008962 | https://github.com/pydata/xarray/issues/364#issuecomment-78008962 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc4MDA4OTYy | naught101 167164 | 2015-03-10T07:51:45Z | 2015-03-10T07:51:45Z | NONE | Nice. Ok, I have hit a stumbling block, and this is much more of a support request, so feel free to direct me elsewhere, but since we're on the topic, I want to do something like:
where The assignment of |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
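The snippet the commenter posted here is truncated in this export, so the following is purely a hypothetical illustration (not a reconstruction of the original) of the kind of derived time-of-day coordinate that the comments above group on, again using the modern xarray API:

```
# Hypothetical illustration only -- the original snippet is truncated above.
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2002-01-01 00:30", periods=8, freq="30min")
data = xr.Dataset({"Tair": ("time", np.random.rand(8))}, coords={"time": times})

# Derive time-of-day as a timedelta64 coordinate along the existing time axis.
data = data.assign_coords(timeofday=data["time"] - data["time"].dt.floor("D"))
print(data["timeofday"])
```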
77978458 | https://github.com/pydata/xarray/issues/364#issuecomment-77978458 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc3OTc4NDU4 | naught101 167164 | 2015-03-10T01:16:25Z | 2015-03-10T01:16:25Z | NONE | Ah, cool, thanks for that link, I missed that in the docs. One thing that would be nice (in both pandas and xray) is a |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
77824657 | https://github.com/pydata/xarray/issues/364#issuecomment-77824657 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc3ODI0NjU3 | naught101 167164 | 2015-03-09T09:46:15Z | 2015-03-09T09:46:15Z | NONE | Heh, I meant the pandas docs - they don't specify the
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
77810787 | https://github.com/pydata/xarray/issues/364#issuecomment-77810787 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc3ODEwNzg3 | naught101 167164 | 2015-03-09T07:34:49Z | 2015-03-09T07:34:49Z | NONE | Unfortunately I'm not familiar enough with pd.resample and pd.TimeGrouper to know the difference in what they can do. One thing that I would like to be able to do, which is not covered by resample and might be covered by TimeGrouper, is to group over month only (not month and year), in order to create a plot of the mean seasonal cycle (at monthly resolution), or similarly, a daily cycle at hourly resolution. I haven't figured out if I can do that with TimeGrouper yet though. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 | |
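The "group over month only" use case maps onto grouping by the month component of the index rather than resampling, which always keeps year and month together. A brief sketch under that reading, with synthetic data and current pandas/xarray APIs rather than the 2015 ones discussed in the thread:

```
# Synthetic daily data; current pandas/xarray APIs.
import numpy as np
import pandas as pd
import xarray as xr

idx = pd.date_range("2000-01-01", "2004-12-31", freq="D")
s = pd.Series(np.random.rand(idx.size), index=idx)

# Mean seasonal cycle: 12 values, all years pooled into each month.
monthly_cycle = s.groupby(s.index.month).mean()
# By contrast, resample yields one value per (year, month).
per_month = s.resample("MS").mean()

# xarray equivalent, via the virtual "time.month" component.
da = xr.DataArray(s.to_numpy(), coords={"time": idx}, dims="time")
seasonal_cycle = da.groupby("time.month").mean()
print(monthly_cycle)
print(seasonal_cycle)
```

A daily cycle at hourly resolution works the same way, grouping on `s.index.hour` or `"time.hour"`.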
77807590 | https://github.com/pydata/xarray/issues/364#issuecomment-77807590 | https://api.github.com/repos/pydata/xarray/issues/364 | MDEyOklzc3VlQ29tbWVudDc3ODA3NTkw | naught101 167164 | 2015-03-09T06:49:55Z | 2015-03-09T06:49:55Z | NONE | Looks good to me. I don't know enough to be able to comment on the API question. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pd.Grouper support? 60303760 |
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
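The query behind this page (issue = 60303760, user = 167164, newest first) can be reproduced against a local copy of the database with Python's standard-library sqlite3 module; the filename below is an assumption, and the schema is the one shown above:

```
# Reproduce this page's filter and sort against a local SQLite copy.
# "github.db" is a placeholder filename.
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 60303760 AND [user] = 167164
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, created, updated, assoc, body in rows:
    # Print the id, timestamp, and the start of each comment body.
    print(comment_id, updated, (body or "")[:60])
conn.close()
```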