issue_comments
4 rows where author_association = "NONE" and user = 11863789 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 892159149 | https://github.com/pydata/xarray/issues/5604#issuecomment-892159149 | https://api.github.com/repos/pydata/xarray/issues/5604 | IC_kwDOAMm_X841LUSt | hansukyang 11863789 | 2021-08-03T20:54:30Z | 2021-08-03T20:54:30Z | NONE | I don't know if this is related but recent updates of Dask has very large memory usage (after 2021.03 version) that I'm not sure is getting addressed yet (https://github.com/dask/dask/issues/7583). | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Extremely Large Memory usage for a very small variable 944996552 |
| 691756409 | https://github.com/pydata/xarray/issues/4406#issuecomment-691756409 | https://api.github.com/repos/pydata/xarray/issues/4406 | MDEyOklzc3VlQ29tbWVudDY5MTc1NjQwOQ== | hansukyang 11863789 | 2020-09-14T01:03:54Z | 2020-09-14T01:03:54Z | NONE | Good point! Yes, after a bit of trial and error, this is what I did. Is there any limitation when over-writing an existing NetCDF file that hasn't been opened by xarray? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Threading Lock issue with to_netcdf and Dask arrays 694112301 |
| 691670151 | https://github.com/pydata/xarray/issues/4406#issuecomment-691670151 | https://api.github.com/repos/pydata/xarray/issues/4406 | MDEyOklzc3VlQ29tbWVudDY5MTY3MDE1MQ== | hansukyang 11863789 | 2020-09-13T13:15:30Z | 2020-09-13T13:15:30Z | NONE | For my case, I saw this happen only when I started to run xarray scripts with cron, about a month ago. I would run it once every six hours and every day or so, I would see a NetCDF file locked up. I ended up changing the work flow somewhat so I don't do this any more (was using xarray to manipulate NetCDF and re-write to it) but this was confusing me for quite a while. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Threading Lock issue with to_netcdf and Dask arrays 694112301 |
| 688101357 | https://github.com/pydata/xarray/issues/4406#issuecomment-688101357 | https://api.github.com/repos/pydata/xarray/issues/4406 | MDEyOklzc3VlQ29tbWVudDY4ODEwMTM1Nw== | hansukyang 11863789 | 2020-09-07T07:28:37Z | 2020-09-07T07:46:39Z | NONE | I seem to also have similar issue, running under docker/linux environment. It doesn't happen always, maybe once out of 4~5 times. Wondering if this is related to NetCDF/HDF5 file locking issue (https://support.nesi.org.nz/hc/en-gb/articles/360000902955-NetCDF-HDF5-file-locking). | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Threading Lock issue with to_netcdf and Dask arrays 694112301 |
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
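Given this schema, the filtered view at the top of the page corresponds to a straightforward query. A minimal sketch, assuming the table lives in the SQLite database behind this page:

```sql
-- Comments by user 11863789 with author_association = 'NONE',
-- sorted by updated_at, newest first (matches the view above).
SELECT [id], [html_url], [created_at], [updated_at], [body]
FROM [issue_comments]
WHERE [author_association] = 'NONE'
  AND [user] = 11863789
ORDER BY [updated_at] DESC;
```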
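The [reactions] column stores each comment's reaction counts as a JSON string (as shown in the rows above). A minimal sketch for unpacking it, assuming a SQLite build with the JSON1 functions available:

```sql
-- Extract the aggregate reaction count from the JSON text in [reactions].
SELECT [id], json_extract([reactions], '$.total_count') AS total_reactions
FROM [issue_comments]
ORDER BY total_reactions DESC;
```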