issue_comments
2 rows where author_association = "MEMBER" and issue = 92762200 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
118447451 | https://github.com/pydata/xarray/issues/453#issuecomment-118447451 | https://api.github.com/repos/pydata/xarray/issues/453 | MDEyOklzc3VlQ29tbWVudDExODQ0NzQ1MQ== | shoyer 1217238 | 2015-07-04T01:09:10Z | 2015-07-04T01:09:10Z | MEMBER | The reason for not using numeric only for max/min is that they should be well defined even for strings and dates -- unlike aggregations like mean, sum, and variance (in principle most of these should also work for dates, but the numpy code has some bugs we would need to work around). The bytes handling in to_datetime is arguably a pandas bug. Alternatively, we could decode character arrays from netcdf as unicode instead of bytes, but I'm not sure that's unambiguously the right thing to do. This is a place where the legacy Python 2 distinction between strings and unicode is a closer match for netcdf (and scientific file formats more generally) than the Python 3 behavior. On Fri, Jul 3, 2015 at 4:40 PM, Will Holmgren notifications@github.com wrote: | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | min/max errors if data variables have string or unicode type 92762200 |
118211474 | https://github.com/pydata/xarray/issues/453#issuecomment-118211474 | https://api.github.com/repos/pydata/xarray/issues/453 | MDEyOklzc3VlQ29tbWVudDExODIxMTQ3NA== | shoyer 1217238 | 2015-07-03T02:13:01Z | 2015-07-03T02:13:01Z | MEMBER | I agree, it's not friendly to give an error message here. Something you could do about this -- you probably want to convert your times into the numpy datetime64 dtype. You also probably want to make this Or in one line: Something xray could do about this -- we could convert string/unicode arrays into the numpy object dtype prior to attempting operations like min/max. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | min/max errors if data variables have string or unicode type 92762200 |
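The two workarounds discussed in the comments above can be sketched in a few lines of numpy; the sample timestamps below are invented for illustration and do not come from the issue:

```python
import numpy as np

# A fixed-width unicode array, like what xarray might decode from a
# netCDF character variable (values are placeholders).
times = np.array(['2015-07-03T01:00', '2015-07-04T02:00'], dtype='U16')

# Workaround 1: cast the timestamp strings to datetime64, so min/max
# (and date arithmetic) behave as expected.
as_dates = times.astype('datetime64[s]')
print(as_dates.min())  # 2015-07-03T01:00:00

# Workaround 2: cast to object dtype; comparison-based reductions such
# as min/max are well defined for plain Python strings.
as_objects = times.astype(object)
print(as_objects.min())  # 2015-07-03T01:00
```

The second cast is what the comment suggests xray itself could do internally before attempting reductions on string/unicode data.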
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
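The schema above and the query that produced this page ("author_association = MEMBER and issue = 92762200 sorted by updated_at descending") can be exercised with the sqlite3 module from the standard library. This is a minimal sketch: the foreign-key references and indexes are dropped, and all columns not needed by the query hold placeholder values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE issue_comments (
        html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY, node_id TEXT,
        user INTEGER, created_at TEXT, updated_at TEXT, author_association TEXT,
        body TEXT, reactions TEXT, performed_via_github_app TEXT, issue INTEGER
    )
""")

# The two rows shown on this page, with placeholder text for long columns.
rows = [
    (None, None, 118447451, None, 1217238, '2015-07-04T01:09:10Z',
     '2015-07-04T01:09:10Z', 'MEMBER', '...', None, None, 92762200),
    (None, None, 118211474, None, 1217238, '2015-07-03T02:13:01Z',
     '2015-07-03T02:13:01Z', 'MEMBER', '...', None, None, 92762200),
]
conn.executemany(
    "INSERT INTO issue_comments VALUES (?,?,?,?,?,?,?,?,?,?,?,?)", rows
)

# Reproduce the page's filter and sort order.
result = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE author_association = 'MEMBER' AND issue = ? "
    "ORDER BY updated_at DESC",
    (92762200,),
).fetchall()
print(result)  # [(118447451,), (118211474,)]
```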