issue_comments
6 rows where issue = 187872991 sorted by updated_at descending
issue 187872991: Convert xarray dataset to dask dataframe or delayed objects (6 comments)

Each record below lists id, user, author_association, created_at, updated_at, html_url, node_id, body, and reactions. All six rows share issue_url https://api.github.com/repos/pydata/xarray/issues/1093 and issue 187872991, and performed_via_github_app is empty for every row.

id 857330304 · user stale[bot] (26384082) · author_association NONE
created_at 2021-06-09T02:50:46Z · updated_at 2021-06-09T02:50:46Z
html_url https://github.com/pydata/xarray/issues/1093#issuecomment-857330304
node_id MDEyOklzc3VlQ29tbWVudDg1NzMzMDMwNA==
body: In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity. If this issue remains relevant, please comment here or remove the […]
reactions: none

id 509773034 · user dcherian (2448579) · author_association MEMBER
created_at 2019-07-09T19:20:51Z · updated_at 2019-07-09T19:20:51Z
html_url https://github.com/pydata/xarray/issues/1093#issuecomment-509773034
node_id MDEyOklzc3VlQ29tbWVudDUwOTc3MzAzNA==
body: I think this was closed by mistake. Is there a way to split up Dataset chunks into dask delayed objects where each object is a Dataset?
reactions: none

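A hedged aside, not part of the thread: the closest built-in that later appeared is Dataset.map_blocks, which hands each chunk-aligned piece to a user function as an ordinary in-memory Dataset, though the pieces are never exposed as standalone dask.delayed objects. A minimal sketch on a synthetic chunked dataset:

```python
import numpy as np
import xarray as xr

# Synthetic chunked dataset standing in for real data (4 chunks along x).
ds = xr.Dataset({"t": (("x", "y"), np.random.rand(100, 80))}).chunk({"x": 25})

def demean(chunk_ds):
    # chunk_ds arrives as a plain in-memory Dataset holding one chunk's data.
    return chunk_ds - chunk_ds.mean()

# template=ds declares that the output has the same structure as the input,
# so xarray skips the extra inference step; the result stays lazy until computed.
result = ds.map_blocks(demean, template=ds)
result.compute()
```
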
id 259213382 · user shoyer (1217238) · author_association MEMBER
created_at 2016-11-08T18:09:11Z · updated_at 2016-11-08T18:09:34Z
html_url https://github.com/pydata/xarray/issues/1093#issuecomment-259213382
node_id MDEyOklzc3VlQ29tbWVudDI1OTIxMzM4Mg==
body: The other component that would help for this is some utility function inside xarray to split a […]
reactions: eyes 1 (total_count 1)

id 259207151 · user shoyer (1217238) · author_association MEMBER
created_at 2016-11-08T17:46:23Z · updated_at 2016-11-08T17:46:23Z
html_url https://github.com/pydata/xarray/issues/1093#issuecomment-259207151
node_id MDEyOklzc3VlQ29tbWVudDI1OTIwNzE1MQ==
body: […] Then we could use xarray's normal indexing operations to create new sub-datasets, wrap them with […]
reactions: none

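To make the approach described in this comment concrete, here is a hedged illustration, not code from the thread: it slices a chunked Dataset along its dask chunk boundaries with ordinary isel indexing and wraps each slice in dask.delayed (the helper names chunk_slices and dataset_to_delayed are invented for this sketch). Each delayed task returns a lazily sliced sub-Dataset; add .load() inside the task if fully realized pieces are wanted.

```python
import itertools

import dask
import numpy as np
import xarray as xr

def chunk_slices(chunk_sizes):
    # e.g. (25, 25, 25, 25) -> [slice(0, 25), slice(25, 50), slice(50, 75), slice(75, 100)]
    bounds = np.cumsum((0,) + tuple(chunk_sizes))
    return [slice(int(lo), int(hi)) for lo, hi in zip(bounds[:-1], bounds[1:])]

def dataset_to_delayed(ds):
    """Return one dask.delayed object per chunk; each evaluates to a sub-Dataset."""
    dims = list(ds.chunks)  # only chunked dimensions appear in ds.chunks
    per_dim = [[(dim, sl) for sl in chunk_slices(ds.chunks[dim])] for dim in dims]
    return [
        dask.delayed(ds.isel)(dict(combo))  # normal indexing, wrapped in delayed
        for combo in itertools.product(*per_dim)
    ]

ds = xr.Dataset({"t": (("x", "y"), np.random.rand(100, 80))}).chunk({"x": 25, "y": 40})
pieces = dataset_to_delayed(ds)  # 4 chunks along x times 2 along y = 8 delayed objects
first = pieces[0].compute()      # a sub-Dataset (still dask-backed; .load() to realize)
```
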
id 259204793 · user jcrist (2783717) · author_association NONE
created_at 2016-11-08T17:37:25Z · updated_at 2016-11-08T17:37:25Z
html_url https://github.com/pydata/xarray/issues/1093#issuecomment-259204793
node_id MDEyOklzc3VlQ29tbWVudDI1OTIwNDc5Mw==
body: I'm not sure if I follow how this is a duck typing use case. I'd write this as a method, following your suggestion on SO: […] Can you explain why you think this could benefit from collection duck typing?
reactions: none

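A hedged side note, not from the thread: the dataframe half of the issue title is covered by an ordinary method, Dataset.to_dask_dataframe, while the delayed-Dataset half is what the rest of the discussion is about. A minimal usage sketch on a synthetic dataset:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"t": (("x", "y"), np.random.rand(100, 80))},
    coords={"x": np.arange(100), "y": np.arange(80)},
).chunk({"x": 25})

# One row per (x, y) point; columns cover the dims and the data variable.
ddf = ds.to_dask_dataframe()     # a lazy dask.dataframe.DataFrame
print(ddf.columns.tolist())
```
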
id 259052436 · user shoyer (1217238) · author_association MEMBER
created_at 2016-11-08T05:55:19Z · updated_at 2016-11-08T05:55:19Z
html_url https://github.com/pydata/xarray/issues/1093#issuecomment-259052436
node_id MDEyOklzc3VlQ29tbWVudDI1OTA1MjQzNg==
body: CC @mrocklin @jcrist This is a good use case for dask collection duck typing: https://github.com/dask/dask/pull/1068
reactions: none

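A hedged note, not from the thread: the kind of collection duck typing referenced here exists in today's dask as the collection protocol (__dask_graph__, __dask_keys__, and friends), and xarray implements it, so a chunked Dataset already works with dask's generic functions:

```python
import dask
import numpy as np
import xarray as xr
from dask.base import is_dask_collection

ds = xr.Dataset({"t": (("x", "y"), np.random.rand(100, 80))}).chunk({"x": 25})

# Dataset defines __dask_graph__, __dask_keys__, __dask_postcompute__, ...
print(is_dask_collection(ds))   # True

# ...so the generic dask functions accept it alongside any other collection.
(loaded,) = dask.compute(ds)    # loaded is an in-memory xarray Dataset
```
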
Table schema:
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```