issue_comments
3 rows where issue = 169274464 and user = 12307589 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
300647473 | https://github.com/pydata/xarray/issues/939#issuecomment-300647473 | https://api.github.com/repos/pydata/xarray/issues/939 | MDEyOklzc3VlQ29tbWVudDMwMDY0NzQ3Mw== | mcgibbon 12307589 | 2017-05-11T00:16:34Z | 2017-05-11T00:16:34Z | CONTRIBUTOR | It is considered poor software design to have 13 arguments in Java and other languages which do not have optional arguments. The same isn't necessarily true of Python, but I haven't seen much discussion or writing on this. I'd much rather have pandas.read_csv the way it is right now than to have a ReadOptions object that would need to contain exactly the same documentation and be just as hard to understand as read_csv. That object would serve only to separate the documentation of the settings for read_csv from the docstring for read_csv. If you really want to cut down on arguments, open_dataset should be separated into multiple functions. I wouldn't necessarily encourage these, but some possibilities are: […] All of that aside, the […] | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Consider how to deal with the proliferation of decoder options on open_dataset 169274464 |
300640372 | https://github.com/pydata/xarray/issues/939#issuecomment-300640372 | https://api.github.com/repos/pydata/xarray/issues/939 | MDEyOklzc3VlQ29tbWVudDMwMDY0MDM3Mg== | mcgibbon 12307589 | 2017-05-10T23:26:57Z | 2017-05-10T23:26:57Z | CONTRIBUTOR | I would disagree with the form […]. What do you mean when you say it's easier to do error checking on field names and values? The xarray implementation can still use fields instead of a dictionary, with the user saying […] | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Consider how to deal with the proliferation of decoder options on open_dataset 169274464 |
237664856 | https://github.com/pydata/xarray/issues/939#issuecomment-237664856 | https://api.github.com/repos/pydata/xarray/issues/939 | MDEyOklzc3VlQ29tbWVudDIzNzY2NDg1Ng== | mcgibbon 12307589 | 2016-08-04T19:55:10Z | 2016-08-04T19:55:10Z | CONTRIBUTOR | We already have the dictionary. Users can make a decode_options dictionary, and then call what they want to with `**decode_options`. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Consider how to deal with the proliferation of decoder options on open_dataset 169274464 |
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
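The view described at the top of this page ("3 rows where issue = 169274464 and user = 12307589 sorted by updated_at descending") is a plain query against this table. A sketch using Python's sqlite3 module, assuming the backing SQLite file is named github.db.

```python
# Sketch of the query behind this page. "github.db" is an assumed filename for
# the SQLite database behind the Datasette instance.
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ? AND user = ?
    ORDER BY updated_at DESC
    """,
    (169274464, 12307589),
).fetchall()

for comment_id, created_at, updated_at, association, body in rows:
    print(comment_id, updated_at, association)

conn.close()
```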