issue_comments
5 rows where issue = 1284094480 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1166471749 | https://github.com/pydata/xarray/issues/6722#issuecomment-1166471749 | https://api.github.com/repos/pydata/xarray/issues/6722 | IC_kwDOAMm_X85FhvJF | Illviljan 14371165 | 2022-06-26T09:41:00Z | 2022-06-26T09:41:00Z | MEMBER | Is the print still slow if somewhere just before the load the array was masked to only show a few start and end elements, | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Avoid loading any data for reprs 1284094480 |
| 1166154340 | https://github.com/pydata/xarray/issues/6722#issuecomment-1166154340 | https://api.github.com/repos/pydata/xarray/issues/6722 | IC_kwDOAMm_X85Fghpk | scottyhq 3924836 | 2022-06-25T00:37:46Z | 2022-06-25T00:37:46Z | MEMBER | This would be a pretty small change and only applies for loading data into numpy arrays, for example current repr for a variable followed by modified for the example dataset above (which already happens for large arrays): Seeing a few values at the edges can be nice, so this makes me realize how data summaries in the metadata (Zarr or STAC) is great for large datasets on cloud storage. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Avoid loading any data for reprs 1284094480 |
| 1165982276 | https://github.com/pydata/xarray/issues/6722#issuecomment-1165982276 | https://api.github.com/repos/pydata/xarray/issues/6722 | IC_kwDOAMm_X85Ff3pE | dcherian 2448579 | 2022-06-24T22:09:56Z | 2022-06-24T22:09:56Z | MEMBER | I think the best thing to do is to not load anything unless asked to. So delete the | {"total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Avoid loading any data for reprs 1284094480 |
| 1165975180 | https://github.com/pydata/xarray/issues/6722#issuecomment-1165975180 | https://api.github.com/repos/pydata/xarray/issues/6722 | IC_kwDOAMm_X85Ff16M | TomNicholas 35968931 | 2022-06-24T21:59:57Z | 2022-06-24T21:59:57Z | MEMBER | So what's the solution here? Add another condition checking for more than a certain number of variables? Somehow check whether a dataset is cloud-backed? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Avoid loading any data for reprs 1284094480 |
| 1165854335 | https://github.com/pydata/xarray/issues/6722#issuecomment-1165854335 | https://api.github.com/repos/pydata/xarray/issues/6722 | IC_kwDOAMm_X85FfYZ_ | dcherian 2448579 | 2022-06-24T19:05:38Z | 2022-06-24T19:05:38Z | MEMBER | cc @e-marshall @scottyhq | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Avoid loading any data for reprs 1284094480 |
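Per the schema below, the reactions column is stored as TEXT, so each cell in the table above is a JSON blob that has to be decoded before the counts can be used. A minimal sketch (the `cell` value is copied from the 2022-06-24T22:09:56Z dcherian row above):

```python
import json

# A reactions cell exactly as stored in the TEXT column.
cell = (
    '{"total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, '
    '"confused": 0, "heart": 0, "rocket": 0, "eyes": 0}'
)

reactions = json.loads(cell)

# Keep only the reaction types that actually occurred.
nonzero = {k: v for k, v in reactions.items() if v and k != "total_count"}
```

Here `nonzero` comes out as `{"+1": 3}`, matching the three thumbs-up on that comment.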
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
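The listing above ("5 rows where issue = 1284094480 sorted by updated_at descending") corresponds to a simple filtered query against this schema. A sketch using Python's `sqlite3` with an in-memory database; the minimal `users` and `issues` tables are assumptions added only so the REFERENCES clauses resolve, and the inserted rows are two of the comments shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Two rows from the table above (id, updated_at, issue).
conn.executemany(
    "INSERT INTO issue_comments (id, updated_at, issue) VALUES (?, ?, ?)",
    [
        (1165854335, "2022-06-24T19:05:38Z", 1284094480),
        (1166471749, "2022-06-26T09:41:00Z", 1284094480),
    ],
)

# The query behind the listing; ISO 8601 timestamps sort correctly as text.
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments "
    "WHERE issue = ? ORDER BY updated_at DESC",
    (1284094480,),
).fetchall()
```

The `idx_issue_comments_issue` index exists precisely so this `WHERE issue = ?` filter doesn't scan the whole table.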

