issue_comments
5 rows where author_association = "MEMBER" and issue = 377075253, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
435738466 | https://github.com/pydata/xarray/pull/2538#issuecomment-435738466 | https://api.github.com/repos/pydata/xarray/issues/2538 | MDEyOklzc3VlQ29tbWVudDQzNTczODQ2Ng== | jhamman 2443309 | 2018-11-05T02:39:50Z | 2018-11-05T02:39:50Z | MEMBER | @shoyer - I think I was tracking with you. I've gone ahead and deprecated the current … | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Stop loading tutorial data by default 377075253
435732988 | https://github.com/pydata/xarray/pull/2538#issuecomment-435732988 | https://api.github.com/repos/pydata/xarray/issues/2538 | MDEyOklzc3VlQ29tbWVudDQzNTczMjk4OA== | shoyer 1217238 | 2018-11-05T01:59:34Z | 2018-11-05T01:59:34Z | MEMBER | Sorry, to be clear: what I meant here is that by default, arrays loaded with NumPy get cached after the first access/operation. Not that we need to preserve the existing behavior of … | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Stop loading tutorial data by default 377075253
435688958 | https://github.com/pydata/xarray/pull/2538#issuecomment-435688958 | https://api.github.com/repos/pydata/xarray/issues/2538 | MDEyOklzc3VlQ29tbWVudDQzNTY4ODk1OA== | shoyer 1217238 | 2018-11-04T17:29:11Z | 2018-11-04T17:29:11Z | MEMBER | OK, that seems reasonable. The default behavior should cache the arrays loaded with NumPy anyways. I would not be opposed to renaming this to open_dataset, either. On Sun, Nov 4, 2018 at 9:19 AM Joe Hamman notifications@github.com wrote: … | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Stop loading tutorial data by default 377075253
435688104 | https://github.com/pydata/xarray/pull/2538#issuecomment-435688104 | https://api.github.com/repos/pydata/xarray/issues/2538 | MDEyOklzc3VlQ29tbWVudDQzNTY4ODEwNA== | jhamman 2443309 | 2018-11-04T17:19:15Z | 2018-11-04T17:19:15Z | MEMBER | @shoyer - absolutely we'll get better performance with numpy arrays in this case. So I'm trying to use our tutorial datasets for some examples with dask (dask/dask-examples#51). The docstring for the … (3) won't require any changes but makes it a little harder to connect the typical use pattern of … | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Stop loading tutorial data by default 377075253
435621566 | https://github.com/pydata/xarray/pull/2538#issuecomment-435621566 | https://api.github.com/repos/pydata/xarray/issues/2538 | MDEyOklzc3VlQ29tbWVudDQzNTYyMTU2Ng== | shoyer 1217238 | 2018-11-03T21:17:02Z | 2018-11-03T21:17:02Z | MEMBER | Our current tutorial datasets are 8MB and 17MB, which is pretty small. You'll definitely get better performance loading datasets of this size into NumPy arrays. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Stop loading tutorial data by default 377075253
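The caching behavior shoyer describes above — arrays backed by NumPy are loaded lazily but kept in memory after the first access or operation — can be sketched in plain Python. This is an illustrative toy, not xarray's actual implementation; `LazyCachedArray` and `fake_load` are hypothetical names invented for the example.

```python
class LazyCachedArray:
    """Toy wrapper: defers the expensive load until first access,
    then keeps the loaded data in memory for later reuse."""

    def __init__(self, loader):
        self._loader = loader   # callable that performs the expensive load
        self._cache = None      # filled in on first access

    @property
    def values(self):
        if self._cache is None:          # load exactly once
            self._cache = self._loader()
        return self._cache               # subsequent accesses hit the cache


load_count = 0

def fake_load():
    """Stand-in for reading a dataset from disk or the network."""
    global load_count
    load_count += 1
    return [1.0, 2.0, 3.0]

arr = LazyCachedArray(fake_load)
assert load_count == 0       # nothing loaded yet
_ = arr.values               # first access triggers the load
_ = arr.values               # second access reuses the cached data
assert load_count == 1       # loaded exactly once
```

This mirrors the distinction the thread settles on: an "open"-style function returns a lazy handle and lets caching happen on first use, while a "load"-style function reads everything eagerly up front — which is why renaming the tutorial helper to open_dataset came up.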
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);