issue_comments
3 rows where issue = 148903579 and user = 12307589 sorted by updated_at descending
---

id: 211630275
html_url: https://github.com/pydata/xarray/pull/829#issuecomment-211630275
issue_url: https://api.github.com/repos/pydata/xarray/issues/829
node_id: MDEyOklzc3VlQ29tbWVudDIxMTYzMDI3NQ==
user: mcgibbon (12307589)
created_at: 2016-04-18T23:31:26Z
updated_at: 2016-04-18T23:31:26Z
author_association: CONTRIBUTOR
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: keep_attrs for Dataset.resample and DataArray.resample (148903579)
body:
Appveyor build failed for some reason when trying to set up Miniconda on Windows 32-bit with Python 2.7. The 64-bit build of Python 3.4 passed.

---

id: 211541244
html_url: https://github.com/pydata/xarray/pull/829#issuecomment-211541244
issue_url: https://api.github.com/repos/pydata/xarray/issues/829
node_id: MDEyOklzc3VlQ29tbWVudDIxMTU0MTI0NA==
user: mcgibbon (12307589)
created_at: 2016-04-18T19:27:27Z
updated_at: 2016-04-18T19:27:27Z
author_association: CONTRIBUTOR
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: keep_attrs for Dataset.resample and DataArray.resample (148903579)
body:
@shoyer I've done the clean-ups you suggested, apart from for-looping tests for the reasons I mentioned in the line note. I hope my "what's new" additions are appropriate.

---

id: 210955565
html_url: https://github.com/pydata/xarray/pull/829#issuecomment-210955565
issue_url: https://api.github.com/repos/pydata/xarray/issues/829
node_id: MDEyOklzc3VlQ29tbWVudDIxMDk1NTU2NQ==
user: mcgibbon (12307589)
created_at: 2016-04-17T04:45:53Z
updated_at: 2016-04-17T04:46:16Z
author_association: CONTRIBUTOR
reactions: { "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: keep_attrs for Dataset.resample and DataArray.resample (148903579)
body:
The idea is that if a dataset has an attribute, it is making a claim about that data. xarray can't guarantee that claims the attributes make about the data remain valid after operating on that data, so it shouldn't retain those attributes unless the user says it can.

I may have ceilometer data that tells me whether a cloud base is detected at any point in time, with an attribute saying that 0 means no cloud detected, another attribute saying that 1 means cloud detected, and another saying that nan means some kind of error. If I resample that data using a mean or median, those attributes are no longer valid. Or my Dataset may have an attribute saying that it was output by a certain instrument. If I save that Dataset after doing some analysis, it may give the impression to someone reading the netCDF that they're reading unprocessed instrument data, when they aren't. Or I may want the hourly variance of a dataset, and do dataset.resample('1H', how='var'). In this case, the units are no longer valid.

It may seem like these are edge cases, but it's better to make no claims most of the time than to make bad claims some of the time.

---
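The default-drop behaviour argued for in the last comment can be sketched in plain Python. This is a hypothetical toy class, not xarray's actual implementation: an aggregation discards attributes unless the caller passes keep_attrs=True, asserting that the metadata still describes the result.

```python
# Toy illustration of the keep_attrs principle: aggregations drop
# metadata by default, since the operation may invalidate claims the
# attributes make about the data. Hypothetical class, not xarray's API.
from statistics import mean


class TinyArray:
    def __init__(self, values, attrs=None):
        self.values = list(values)
        self.attrs = dict(attrs or {})

    def mean(self, keep_attrs=False):
        # Attributes survive only if the caller asserts they stay valid.
        new_attrs = dict(self.attrs) if keep_attrs else {}
        return TinyArray([mean(self.values)], attrs=new_attrs)


cloud = TinyArray([0, 1, 1, 0], attrs={"flag_meanings": "0=clear 1=cloud"})
print(cloud.mean().attrs)                 # {} -- flag meanings no longer apply
print(cloud.mean(keep_attrs=True).attrs)  # {'flag_meanings': '0=clear 1=cloud'}
```

This mirrors the ceilometer example above: a mean of 0/1 cloud flags is a fraction, so the flag-meaning attributes would be misleading if silently carried over.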
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
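The query this page describes ("rows where issue = 148903579 and user = 12307589 sorted by updated_at descending") can be reproduced against the schema above with stdlib sqlite3. The rows inserted here are the three comments shown on this page, reduced to only the columns the query touches.

```python
# Load the issue_comments schema into an in-memory SQLite database and
# run the filter/sort this page describes. Only the columns the query
# needs are populated; values are taken from the rows shown above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issue_comments] (
    [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
    [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
    [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
""")
rows = [
    (211630275, 12307589, "2016-04-18T23:31:26Z", 148903579),
    (211541244, 12307589, "2016-04-18T19:27:27Z", 148903579),
    (210955565, 12307589, "2016-04-17T04:46:16Z", 148903579),
]
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, issue) VALUES (?, ?, ?, ?)",
    rows,
)
# ISO-8601 timestamps sort correctly as text, so ORDER BY works as-is.
ids = [r[0] for r in conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE issue = 148903579 AND user = 12307589 "
    "ORDER BY updated_at DESC"
)]
print(ids)  # [211630275, 211541244, 210955565]
```

The result order matches the listing above: most recently updated comment first.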