issue_comments
4 rows where issue = 224553135 and user = 44488331 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1043022273 | https://github.com/pydata/xarray/issues/1385#issuecomment-1043022273 | https://api.github.com/repos/pydata/xarray/issues/1385 | IC_kwDOAMm_X84-K0HB | jtomfarrar 44488331 | 2022-02-17T14:42:41Z | 2022-02-17T14:42:41Z | NONE | Thank you. A member of my research group made the netcdf file, so we will make a second file with the time encoding fixed. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | slow performance with open_mfdataset 224553135 |
| 1043009735 | https://github.com/pydata/xarray/issues/1385#issuecomment-1043009735 | https://api.github.com/repos/pydata/xarray/issues/1385 | IC_kwDOAMm_X84-KxDH | jtomfarrar 44488331 | 2022-02-17T14:30:03Z | 2022-02-17T14:30:03Z | NONE | Thank you, Ryan. I will post the file to a server with a stable URL and replace the google drive link in the other post. My original issue was that I wanted to not read the data (yet), only to have a look at the metadata. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | slow performance with open_mfdataset 224553135 |
| 1042962960 | https://github.com/pydata/xarray/issues/1385#issuecomment-1042962960 | https://api.github.com/repos/pydata/xarray/issues/1385 | IC_kwDOAMm_X84-KloQ | jtomfarrar 44488331 | 2022-02-17T13:43:21Z | 2022-02-17T13:43:21Z | NONE | Thanks, Ryan! Sure-- here's a link to the file: https://drive.google.com/file/d/1-05bG2kF8wbvldYtDpZ3LYLyqXnvZyw1/view?usp=sharing (I could post to a web server if there's any reason to prefer that.) | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | slow performance with open_mfdataset 224553135 |
| 1042930077 | https://github.com/pydata/xarray/issues/1385#issuecomment-1042930077 | https://api.github.com/repos/pydata/xarray/issues/1385 | IC_kwDOAMm_X84-Kdmd | jtomfarrar 44488331 | 2022-02-17T13:06:18Z | 2022-02-17T13:06:18Z | NONE | @rabernat wrote: […] I seem to be experiencing a similar (same?) issue with open_dataset: https://stackoverflow.com/questions/71147712/can-i-force-xarray-open-dataset-to-do-a-lazy-load?stw=2 | {"total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0} |  | slow performance with open_mfdataset 224553135 |
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
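The query below is a minimal sketch of how the view at the top of this page (4 rows where issue = 224553135 and user = 44488331, sorted by updated_at descending) could be reproduced against this schema. The table and column names are taken from the CREATE TABLE statement above and the literal IDs from the row filter; the exact SQL the page itself runs is not shown here, so treat this as an illustration rather than the page's query.
-- Sketch: filter by the issue and user foreign keys, newest comments first.
SELECT [id], [html_url], [user], [created_at], [updated_at],
       [author_association], [body], [reactions], [issue]
FROM [issue_comments]
WHERE [issue] = 224553135
  AND [user] = 44488331
ORDER BY [updated_at] DESC;
Both WHERE clauses are covered by the idx_issue_comments_issue and idx_issue_comments_user indexes defined above.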