issue_comments
3 rows where user = 4762711 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
524895731 | https://github.com/pydata/xarray/issues/1471#issuecomment-524895731 | https://api.github.com/repos/pydata/xarray/issues/1471 | MDEyOklzc3VlQ29tbWVudDUyNDg5NTczMQ== | zbarry 4762711 | 2019-08-26T15:00:35Z | 2019-08-26T15:00:35Z | NONE | I just wanted to chime in as to the usefulness of being able to do something like this without the extra mental overhead being required by the workaround proposed. My use case parallels @smartass101's very closely. Have there been any updates to xarray since last year that might make streamlining this use case a bit more feasible, by any chance? :) | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | sharing dimensions across dataarrays in a dataset 241290234
504061610 | https://github.com/pydata/xarray/issues/2940#issuecomment-504061610 | https://api.github.com/repos/pydata/xarray/issues/2940 | MDEyOklzc3VlQ29tbWVudDUwNDA2MTYxMA== | zbarry 4762711 | 2019-06-20T15:03:22Z | 2019-06-20T15:03:22Z | NONE | @shoyer - yeah, I'd tend to agree. I had to do a bit of digging when I encountered this problem, because I was getting serialization errors (since it was doing away with the chunks on massive arrays) that I then eventually connected to this | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | test_rolling_wrapped_dask is failing with dask-master 440233667
503171241 | https://github.com/pydata/xarray/pull/2942#issuecomment-503171241 | https://api.github.com/repos/pydata/xarray/issues/2942 | MDEyOklzc3VlQ29tbWVudDUwMzE3MTI0MQ== | zbarry 4762711 | 2019-06-18T14:51:06Z | 2019-06-19T17:33:21Z | NONE | Hey just wanted to chime in and say it appears that commit https://github.com/fujiisoup/xarray/commit/098daf3020e817d03a17a81963ad3cf831fbb48c is still losing chunking for me as far as I can tell when running it with dask distributed / dask jobqueue. I can do some extended testing & look further into this if people have some suggestions for how to go about it. @fujiisoup | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Fix rolling operation with dask and bottleneck 440900618
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
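The page's filter ("rows where user = 4762711 sorted by updated_at descending") corresponds to a plain SELECT against this schema. A minimal sketch using Python's built-in sqlite3 module, with the three rows above reduced to just the columns the query touches:

```python
import sqlite3

# Schema as exported above (indexes included).
SCHEMA = """
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
   [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)

# The three comments shown in the table, keeping only id, user, updated_at.
rows = [
    (524895731, 4762711, "2019-08-26T15:00:35Z"),
    (504061610, 4762711, "2019-06-20T15:03:22Z"),
    (503171241, 4762711, "2019-06-19T17:33:21Z"),
]
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at) VALUES (?, ?, ?)",
    rows,
)

# ISO-8601 timestamps stored as TEXT sort correctly with a plain ORDER BY.
ids = [
    r[0]
    for r in conn.execute(
        "SELECT id FROM issue_comments WHERE user = ? "
        "ORDER BY updated_at DESC",
        (4762711,),
    )
]
print(ids)  # newest update first
```

Note that the `user` index (`idx_issue_comments_user`) is what makes this filter cheap at scale; the ORDER BY works on the TEXT timestamps only because they are ISO 8601, which sorts lexicographically in chronological order.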