issue_comments
4 rows where author_association = "CONTRIBUTOR", issue = 288184220, and user = 14314623, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
531945252 | https://github.com/pydata/xarray/issues/1823#issuecomment-531945252 | https://api.github.com/repos/pydata/xarray/issues/1823 | MDEyOklzc3VlQ29tbWVudDUzMTk0NTI1Mg== | jbusecke 14314623 | 2019-09-16T20:29:35Z | 2019-09-16T20:29:35Z | CONTRIBUTOR | Wooooow. Thanks. I'll have to give this a whirl soon. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | We need a fast path for open_mfdataset 288184220
373123959 | https://github.com/pydata/xarray/issues/1823#issuecomment-373123959 | https://api.github.com/repos/pydata/xarray/issues/1823 | MDEyOklzc3VlQ29tbWVudDM3MzEyMzk1OQ== | jbusecke 14314623 | 2018-03-14T18:16:38Z | 2018-03-14T18:16:38Z | CONTRIBUTOR | Awesome, thanks for the clarification. I just looked at #1981 and it indeed seems very elegant (in fact I just now used this approach to parallelize printing of movie frames!). Thanks for that! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | We need a fast path for open_mfdataset 288184220
372856076 | https://github.com/pydata/xarray/issues/1823#issuecomment-372856076 | https://api.github.com/repos/pydata/xarray/issues/1823 | MDEyOklzc3VlQ29tbWVudDM3Mjg1NjA3Ng== | jbusecke 14314623 | 2018-03-13T23:40:54Z | 2018-03-13T23:40:54Z | CONTRIBUTOR | Would these two options necessarily be mutually exclusive? I think parallelizing the read-in sounds amazing, but isn't there some merit in skipping some of the checks altogether if the user is sure about the structure of the data contained in the many files? I am often working with the aforementioned type of data (many files, each containing either a new timestep or a different variable, but most of the dimensions/coordinates are the same). In some cases I am finding that reading the data "lazily" consumes a significant amount of the time in my workflow. I am unsure how hard this would be to achieve, and perhaps it is not worth it after all. Just putting out a few ideas while I wait for my … | { "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | We need a fast path for open_mfdataset 288184220
359069753 | https://github.com/pydata/xarray/issues/1823#issuecomment-359069753 | https://api.github.com/repos/pydata/xarray/issues/1823 | MDEyOklzc3VlQ29tbWVudDM1OTA2OTc1Mw== | jbusecke 14314623 | 2018-01-19T19:45:00Z | 2018-01-19T19:45:00Z | CONTRIBUTOR | I did not really find an elegant solution. What I did was just specify all dims and coords as … | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | We need a fast path for open_mfdataset 288184220
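The header above describes a filtered, sorted view of the issue_comments table. As a minimal sketch (assuming a standard SQLite database with the schema shown below), the equivalent query might look like:

```sql
-- Reproduce the view described above: comments by user 14314623
-- with author_association = 'CONTRIBUTOR' on issue 288184220,
-- most recently updated first.
SELECT id, [user], created_at, updated_at, author_association, body
FROM issue_comments
WHERE author_association = 'CONTRIBUTOR'
  AND issue = 288184220
  AND [user] = 14314623
ORDER BY updated_at DESC;
```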
```sql
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
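The [user] and [issue] columns are foreign keys into the [users] and [issues] tables, and the two indexes support lookups on them. As a sketch of how those keys could be resolved (the [login] and [title] column names are assumptions; those tables are not defined in this section):

```sql
-- Hypothetical join resolving the foreign keys; users.login and
-- issues.title are assumed column names in the referenced tables.
SELECT c.id, u.login, i.title, c.updated_at, c.body
FROM issue_comments AS c
JOIN users AS u ON u.id = c.[user]
JOIN issues AS i ON i.id = c.[issue]
WHERE c.[issue] = 288184220
ORDER BY c.updated_at DESC;
```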