issue_comments
10 rows where author_association = "CONTRIBUTOR" and user = 1050278, sorted by updated_at descending
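Expressed as plain SQL, that filter corresponds roughly to the query below. This is a sketch; the exact statement Datasette builds may differ, and the bracket quoting simply follows the CREATE TABLE definition at the bottom of the page.

```sql
-- Rough SQL equivalent of the filter and sort applied on this page.
select *
from issue_comments
where author_association = 'CONTRIBUTOR'
  and [user] = 1050278
order by updated_at desc;
```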
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
449523093 | https://github.com/pydata/xarray/pull/2589#issuecomment-449523093 | https://api.github.com/repos/pydata/xarray/issues/2589 | MDEyOklzc3VlQ29tbWVudDQ0OTUyMzA5Mw== | jjhelmus 1050278 | 2018-12-21T23:30:14Z | 2018-12-21T23:30:14Z | CONTRIBUTOR | | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | added some logic to deal with rasterio objects in addition to filepaths 387123860
448746048 | https://github.com/pydata/xarray/pull/2589#issuecomment-448746048 | https://api.github.com/repos/pydata/xarray/issues/2589 | MDEyOklzc3VlQ29tbWVudDQ0ODc0NjA0OA== | jjhelmus 1050278 | 2018-12-19T21:14:18Z | 2018-12-19T21:14:18Z | CONTRIBUTOR | Opening and issue in the anaconda-issues repository is the best option at this time for requesting a package update. I'm looking at updating the | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | added some logic to deal with rasterio objects in addition to filepaths 387123860
432788686 | https://github.com/pydata/xarray/issues/2503#issuecomment-432788686 | https://api.github.com/repos/pydata/xarray/issues/2503 | MDEyOklzc3VlQ29tbWVudDQzMjc4ODY4Ng== | jjhelmus 1050278 | 2018-10-24T19:04:44Z | 2018-10-24T19:04:44Z | CONTRIBUTOR | h10edf3e_1 contains the timeout fix and is build against hdf5 1.10.2. The | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Problems with distributed and opendap netCDF endpoint 373121666
432783232 | https://github.com/pydata/xarray/issues/2503#issuecomment-432783232 | https://api.github.com/repos/pydata/xarray/issues/2503 | MDEyOklzc3VlQ29tbWVudDQzMjc4MzIzMg== | jjhelmus 1050278 | 2018-10-24T18:50:48Z | 2018-10-24T18:50:48Z | CONTRIBUTOR | In | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Problems with distributed and opendap netCDF endpoint 373121666
292260729 | https://github.com/pydata/xarray/issues/1353#issuecomment-292260729 | https://api.github.com/repos/pydata/xarray/issues/1353 | MDEyOklzc3VlQ29tbWVudDI5MjI2MDcyOQ== | jjhelmus 1050278 | 2017-04-06T18:12:18Z | 2017-04-06T18:12:18Z | CONTRIBUTOR | @shoyer Agreed, there is no need for | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Cannot import xarray.tests due to use of pytest.config 219611498
292195182 | https://github.com/pydata/xarray/issues/1353#issuecomment-292195182 | https://api.github.com/repos/pydata/xarray/issues/1353 | MDEyOklzc3VlQ29tbWVudDI5MjE5NTE4Mg== | jjhelmus 1050278 | 2017-04-06T14:38:57Z | 2017-04-06T14:38:57Z | CONTRIBUTOR | | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Cannot import xarray.tests due to use of pytest.config 219611498
291901003 | https://github.com/pydata/xarray/issues/1353#issuecomment-291901003 | https://api.github.com/repos/pydata/xarray/issues/1353 | MDEyOklzc3VlQ29tbWVudDI5MTkwMTAwMw== | jjhelmus 1050278 | 2017-04-05T15:36:00Z | 2017-04-05T15:36:00Z | CONTRIBUTOR | pytest-dev/pytest#472 from BitBucket #472 seems to touch on this topic and references the note found at the end of the skip and xfail documentation: | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Cannot import xarray.tests due to use of pytest.config 219611498
123870841 | https://github.com/pydata/xarray/pull/487#issuecomment-123870841 | https://api.github.com/repos/pydata/xarray/issues/487 | MDEyOklzc3VlQ29tbWVudDEyMzg3MDg0MQ== | jjhelmus 1050278 | 2015-07-22T21:25:46Z | 2015-07-22T21:25:46Z | CONTRIBUTOR | I think those commits should cover all the comments, let me know if I missed any or if this need further refinement. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Allow for multiple values in missing_value or _FillValue 96421438
122332344 | https://github.com/pydata/xarray/issues/471#issuecomment-122332344 | https://api.github.com/repos/pydata/xarray/issues/471 | MDEyOklzc3VlQ29tbWVudDEyMjMzMjM0NA== | jjhelmus 1050278 | 2015-07-17T16:23:34Z | 2015-07-17T16:23:34Z | CONTRIBUTOR | Sounds like a good solution. I'll work on a PR. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Read NetCDF files with multiple values in missing_value attribute 94966000
122294746 | https://github.com/pydata/xarray/issues/471#issuecomment-122294746 | https://api.github.com/repos/pydata/xarray/issues/471 | MDEyOklzc3VlQ29tbWVudDEyMjI5NDc0Ng== | jjhelmus 1050278 | 2015-07-17T14:30:15Z | 2015-07-17T14:30:15Z | CONTRIBUTOR | Yes, the two values in the missing_value attribute indicate two classes of data (not collected vs below minimum detectable threshold) and these data can be access by setting mask_and_scale=False but this also results in a valid data being returned without scaling which makes it less useful. My question is how should xray should handle these cases? Either replace all instances of the values in missing_value with NaN or raise a error message stating that multiple missing_values are not supported similar? I'd be happy to create a PR implementing either case. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Read NetCDF files with multiple values in missing_value attribute 94966000
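The reactions column stores each comment's reaction summary as a JSON string in a TEXT field, so individual counts can be pulled out with SQLite's JSON1 functions. A minimal sketch, assuming the SQLite build behind this page has JSON1 available (it is compiled into modern SQLite releases):

```sql
-- Extract reaction counts from the JSON text stored in [reactions].
-- The "+1" key is quoted inside the JSON path because it is not a bare identifier.
select id,
       json_extract(reactions, '$.total_count') as total_reactions,
       json_extract(reactions, '$."+1"') as plus_one
from issue_comments
where [user] = 1050278
order by total_reactions desc, updated_at desc;
```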
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
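The [user] and [issue] columns are integer foreign keys into the [users] and [issues] tables, which is how the rows above resolve to "jjhelmus" and to issue titles. A hypothetical join illustrating that relationship, assuming the referenced [issues] table exposes [id] and [title] columns (its schema is not shown on this page):

```sql
-- Hypothetical: resolve issue titles via the [issue] foreign key.
-- Assumes the referenced [issues] table has [id] and [title] columns.
select c.id, c.updated_at, i.title
from issue_comments as c
join issues as i on i.id = c.issue
where c.[user] = 1050278
order by c.updated_at desc;
```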