issue_comments
3 rows where user = 13770365 sorted by updated_at descending
id: 592051771
html_url: https://github.com/pydata/xarray/issues/3785#issuecomment-592051771
issue_url: https://api.github.com/repos/pydata/xarray/issues/3785
node_id: MDEyOklzc3VlQ29tbWVudDU5MjA1MTc3MQ==
user: ngreenwald 13770365
created_at: 2020-02-27T16:30:36Z
updated_at: 2020-02-27T16:30:36Z
author_association: NONE
body: Hey all, just wanted to follow up to see if anyone had suggestions for how to approach this. We're running into this issue when we load an xarray, make some changes to it, and then save the modified version. Especially for interactive sessions, this produces bugs that are challenging to track down, as it appears that the previous step has failed, not that the actual loading is the problem.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: open_dataarray(cache=False) still uses cached version of dataarray 568705055

id: 547577441
html_url: https://github.com/pydata/xarray/issues/3458#issuecomment-547577441
issue_url: https://api.github.com/repos/pydata/xarray/issues/3458
node_id: MDEyOklzc3VlQ29tbWVudDU0NzU3NzQ0MQ==
user: ngreenwald 13770365
created_at: 2019-10-29T18:54:42Z
updated_at: 2019-10-29T18:54:42Z
author_association: NONE
body: Great, thanks so much! On Tue, Oct 29, 2019 at 11:20 AM Deepak Cherian notifications@github.com wrote:
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Keep index dimension when selecting only a single coord 514077742

id: 544183597
html_url: https://github.com/pydata/xarray/issues/3416#issuecomment-544183597
issue_url: https://api.github.com/repos/pydata/xarray/issues/3416
node_id: MDEyOklzc3VlQ29tbWVudDU0NDE4MzU5Nw==
user: ngreenwald 13770365
created_at: 2019-10-19T18:19:08Z
updated_at: 2019-10-19T18:19:33Z
author_association: NONE
body: Yes, you're right @max-sixty, seems like the actual IO is happening within scipy. Do you have any suggestions for how I might troubleshoot this further? Could I make a similar example that uses only the scipy netcdf functionality somehow to drill down into where the memory error is coming from?
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Cannot write array larger than 4GB with SciPy netCDF backend 509285415
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
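The view on this page ("rows where user = 13770365 sorted by updated_at descending") corresponds to a simple query against the schema above. A minimal sketch using Python's sqlite3 module, with the foreign-key REFERENCES clauses dropped so the snippet is self-contained and only the columns needed for the filter populated:

```python
import sqlite3

# In-memory copy of the issue_comments schema shown above (sketch only;
# the real database is the github-to-sqlite export of the xarray repo).
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
)
""")

# The three comments listed on this page (id, user, updated_at only).
rows = [
    (592051771, 13770365, "2020-02-27T16:30:36Z"),
    (547577441, 13770365, "2019-10-29T18:54:42Z"),
    (544183597, 13770365, "2019-10-19T18:19:33Z"),
]
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at) VALUES (?, ?, ?)",
    rows,
)

# The page's view: this user's comments, newest update first.
ids = [r[0] for r in conn.execute(
    "SELECT id FROM issue_comments WHERE user = 13770365 "
    "ORDER BY updated_at DESC"
)]
print(ids)  # [592051771, 547577441, 544183597]
```

ISO 8601 timestamps sort correctly as plain text, which is why the TEXT `updated_at` column orders these rows chronologically without any date parsing.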