issue_comments
5 rows where issue = 579722569 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
1371755247 | https://github.com/pydata/xarray/pull/3858#issuecomment-1371755247 | https://api.github.com/repos/pydata/xarray/issues/3858 | IC_kwDOAMm_X85Rw1Lv | jhamman 2443309 | 2023-01-05T03:58:54Z | 2023-01-05T03:58:54Z | MEMBER | I believe this can be closed now. Pynio is on the way out and it seems like we were leaning away from including this anyway. @pgierz - please feel free to reopen if I have that wrong. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Backend env 579722569
598577015 | https://github.com/pydata/xarray/pull/3858#issuecomment-598577015 | https://api.github.com/repos/pydata/xarray/issues/3858 | MDEyOklzc3VlQ29tbWVudDU5ODU3NzAxNQ== | shoyer 1217238 | 2020-03-13T06:49:05Z | 2020-03-13T06:49:05Z | MEMBER | I think that would be the ideal resolution, but if I recall PyNIO isn't under active development anymore. So in that case, we could consider adding our own solution in xarray to add a new backend argument, but it should be specific to PyNIO, not all xarray backends, i.e., it should live entirely in the PyNIO backend module. To make this work robustly with all of xarray's file caching machinery (used with dask, etc.), the setup of the environment variable needs to happen inside a helper function wrapping the low-level file open call. I think you could do something similar with overriding environment variables inside this helper function. Ideally you could also do clean-up here, deleting these environment variables, but I'm not sure if there's an easy/safe way to do this currently. These methods can get called in multiple threads (e.g., from dask) and I don't think we have a global clean-up mechanism that would work for this. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Backend env 579722569
598028629 | https://github.com/pydata/xarray/pull/3858#issuecomment-598028629 | https://api.github.com/repos/pydata/xarray/issues/3858 | MDEyOklzc3VlQ29tbWVudDU5ODAyODYyOQ== | pep8speaks 24736507 | 2020-03-12T06:30:57Z | 2020-03-13T06:30:00Z | NONE | Hello @pgierz! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found: There are currently no PEP 8 issues detected in this Pull Request. Cheers! :beers: Comment last updated at 2020-03-13 06:29:59 UTC | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Backend env 579722569
598569859 | https://github.com/pydata/xarray/pull/3858#issuecomment-598569859 | https://api.github.com/repos/pydata/xarray/issues/3858 | MDEyOklzc3VlQ29tbWVudDU5ODU2OTg1OQ== | pgierz 2444231 | 2020-03-13T06:21:35Z | 2020-03-13T06:25:43Z | NONE | Yes, I agree. Having a library only depend on the environment is less than optimal, to say the least. The main reason behind this was to allow one of the GRIB backends to read custom tables. I've already contacted both the cfgrib and PyNIO people to see about changing this to something more flexible. Regarding writing my own utility: I'm afraid this introduces one more hoop for end users to jump through. If I can tell my colleagues "Hey, just use Xarray", that normally works pretty well. If I now need to say "Hey, use my own little thing plus Xarray plus ..." (who knows what else), that might introduce more friction. However, it's an interesting idea: how challenging would it be to write my own Xarray backend (in this case for GRIB files) and include that instead? Would Xarray be open to including a further backend? That would separate the logic into its own project, and if anything breaks, the Xarray team would have less to worry about and can just refer issues to the backend development team (so, I guess, me 😉)... | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Backend env 579722569
598558696 | https://github.com/pydata/xarray/pull/3858#issuecomment-598558696 | https://api.github.com/repos/pydata/xarray/issues/3858 | MDEyOklzc3VlQ29tbWVudDU5ODU1ODY5Ng== | shoyer 1217238 | 2020-03-13T05:38:46Z | 2020-03-13T05:38:46Z | MEMBER | Thanks for putting together this pull request! My main concern here is that setting environment variables feels pretty decoupled from the rest of the logic here. What do you think about writing your own utility for this sort of thing? e.g., based on one of these examples from StackOverflow: https://stackoverflow.com/questions/2059482/python-temporarily-modify-the-current-processs-environment | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Backend env 579722569
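
The two comments from shoyer above describe a pattern: temporarily set the backend's environment variables inside a helper that wraps the low-level file open, so the variables are also in effect whenever xarray's file-caching machinery re-opens the file (e.g. under dask). The sketch below is not xarray code; the names `set_env` and `open_with_env` are made up for illustration, and no real PyNIO environment-variable name is assumed.

```python
import os
from contextlib import contextmanager


@contextmanager
def set_env(**env):
    """Temporarily set environment variables, restoring previous values on exit."""
    old = {key: os.environ.get(key) for key in env}
    os.environ.update(env)
    try:
        yield
    finally:
        # Clean-up: restore or remove each variable we touched. This is not
        # thread-safe -- other threads see the modified os.environ in the meantime,
        # which is the caveat raised in the comment above.
        for key, value in old.items():
            if value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = value


def open_with_env(opener, path, env=None, **kwargs):
    """Call a low-level file opener with the given environment variables set.

    Doing this at the opener level (rather than once in the user-facing call)
    means the variables are also set when a cached file handle is re-opened.
    """
    with set_env(**(env or {})):
        return opener(path, **kwargs)
```

Usage would look something like `open_with_env(Nio.open_file, "data.grb", env={"SOME_TABLE_VAR": "/path/to/tables"})`, where `SOME_TABLE_VAR` is a placeholder for whatever variable the GRIB backend actually reads.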
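pgierz's question about shipping a separate backend is, in present-day xarray, the documented route: since xarray v0.18 (released after this 2020 thread), third-party packages can register a backend through the `xarray.backends` entry-point group and a `BackendEntrypoint` subclass, which keeps backend-specific options (such as custom GRIB code tables) entirely outside xarray. A minimal sketch follows, assuming a recent xarray; the class, the `code_tables` keyword, and the `_read_grib` helper are hypothetical.

```python
import xarray as xr
from xarray.backends import BackendEntrypoint


def _read_grib(path, code_tables=None):
    """Placeholder for the real GRIB-reading logic (e.g. built on cfgrib)."""
    raise NotImplementedError


class GribTablesBackendEntrypoint(BackendEntrypoint):
    """Hypothetical backend that accepts custom GRIB code tables as a keyword.

    Registered by the backend's own package, e.g. in pyproject.toml:

        [project.entry-points."xarray.backends"]
        grib_tables = "my_grib_backend:GribTablesBackendEntrypoint"
    """

    open_dataset_parameters = ("filename_or_obj", "drop_variables", "code_tables")

    def open_dataset(self, filename_or_obj, *, drop_variables=None, code_tables=None):
        # The backend-specific option replaces the environment variable and
        # never leaks outside this module.
        ds = xr.Dataset(_read_grib(filename_or_obj, code_tables=code_tables))
        if drop_variables:
            ds = ds.drop_vars(drop_variables)
        return ds

    def guess_can_open(self, filename_or_obj):
        return str(filename_or_obj).endswith((".grb", ".grib", ".grb2"))
```

With such a package installed, users could call `xr.open_dataset("forecast.grb", engine="grib_tables", code_tables="/path/to/tables")`, and issues with the reader would land in the backend project's tracker rather than xarray's, as the comment anticipates.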
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);