issue_comments
10 rows where issue = 277538485 and user = 1217238 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 356390513 | https://github.com/pydata/xarray/issues/1745#issuecomment-356390513 | https://api.github.com/repos/pydata/xarray/issues/1745 | MDEyOklzc3VlQ29tbWVudDM1NjM5MDUxMw== | shoyer 1217238 | 2018-01-09T19:36:10Z | 2018-01-09T19:36:10Z | MEMBER | Both the warning message and the upstream anaconda issue seem like good ideas to me. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | open_mfdataset() memory error in v0.10 277538485 |
| 352152392 | https://github.com/pydata/xarray/issues/1745#issuecomment-352152392 | https://api.github.com/repos/pydata/xarray/issues/1745 | MDEyOklzc3VlQ29tbWVudDM1MjE1MjM5Mg== | shoyer 1217238 | 2017-12-16T01:58:02Z | 2017-12-16T01:58:02Z | MEMBER | If upgrading to a newer version of netcdf4-python isn't an option, we might need to figure out a workaround for xarray.... It seems that anaconda is still distributing netCDF4 1.2.4, which doesn't help here. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | open_mfdataset() memory error in v0.10 277538485 |
| 351788352 | https://github.com/pydata/xarray/issues/1745#issuecomment-351788352 | https://api.github.com/repos/pydata/xarray/issues/1745 | MDEyOklzc3VlQ29tbWVudDM1MTc4ODM1Mg== | shoyer 1217238 | 2017-12-14T17:58:05Z | 2017-12-14T17:58:05Z | MEMBER | Can you reproduce this just using netCDF4-python? Try: `import netCDF4; ds = netCDF4.Dataset(path); print(ds); print(ds.filepath())` If so, it would be good to file a bug upstream. Actually, it looks like this might be https://github.com/Unidata/netcdf4-python/issues/506 | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | open_mfdataset() memory error in v0.10 277538485 |
| 351783850 | https://github.com/pydata/xarray/issues/1745#issuecomment-351783850 | https://api.github.com/repos/pydata/xarray/issues/1745 | MDEyOklzc3VlQ29tbWVudDM1MTc4Mzg1MA== | shoyer 1217238 | 2017-12-14T17:41:05Z | 2017-12-14T17:41:11Z | MEMBER | I think there is probably a bug buried inside the | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | open_mfdataset() memory error in v0.10 277538485 |
| 351780487 | https://github.com/pydata/xarray/issues/1745#issuecomment-351780487 | https://api.github.com/repos/pydata/xarray/issues/1745 | MDEyOklzc3VlQ29tbWVudDM1MTc4MDQ4Nw== | shoyer 1217238 | 2017-12-14T17:28:37Z | 2017-12-14T17:28:37Z | MEMBER | @braaannigan can you try adding | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | open_mfdataset() memory error in v0.10 277538485 |
| 351779445 | https://github.com/pydata/xarray/issues/1745#issuecomment-351779445 | https://api.github.com/repos/pydata/xarray/issues/1745 | MDEyOklzc3VlQ29tbWVudDM1MTc3OTQ0NQ== | shoyer 1217238 | 2017-12-14T17:24:40Z | 2017-12-14T17:24:40Z | MEMBER | | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | open_mfdataset() memory error in v0.10 277538485 |
| 351765967 | https://github.com/pydata/xarray/issues/1745#issuecomment-351765967 | https://api.github.com/repos/pydata/xarray/issues/1745 | MDEyOklzc3VlQ29tbWVudDM1MTc2NTk2Nw== | shoyer 1217238 | 2017-12-14T16:41:19Z | 2017-12-14T16:41:19Z | MEMBER | @braaannigan what about replacing | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | open_mfdataset() memory error in v0.10 277538485 |
| 351470450 | https://github.com/pydata/xarray/issues/1745#issuecomment-351470450 | https://api.github.com/repos/pydata/xarray/issues/1745 | MDEyOklzc3VlQ29tbWVudDM1MTQ3MDQ1MA== | shoyer 1217238 | 2017-12-13T17:54:54Z | 2017-12-13T17:54:54Z | MEMBER | @braaannigan Can you share the name of your problematic file? One possibility is that `LOCK = threading.Lock()` `def is_remote_uri(path): with LOCK: return bool(re.search('^https?\://', path))` | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | open_mfdataset() memory error in v0.10 277538485 |
| 347819491 | https://github.com/pydata/xarray/issues/1745#issuecomment-347819491 | https://api.github.com/repos/pydata/xarray/issues/1745 | MDEyOklzc3VlQ29tbWVudDM0NzgxOTQ5MQ== | shoyer 1217238 | 2017-11-29T10:34:25Z | 2017-11-29T10:34:25Z | MEMBER | | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | open_mfdataset() memory error in v0.10 277538485 |
| 347811473 | https://github.com/pydata/xarray/issues/1745#issuecomment-347811473 | https://api.github.com/repos/pydata/xarray/issues/1745 | MDEyOklzc3VlQ29tbWVudDM0NzgxMTQ3Mw== | shoyer 1217238 | 2017-11-29T10:03:51Z | 2017-11-29T10:03:51Z | MEMBER | I think this was introduced by https://github.com/pydata/xarray/pull/1551, where we started loading coordinates that are compared for equality into memory. This speeds up We might consider adding an option for reduced memory usage at the price of speed. @crusaderky @jhamman @rabernat any thoughts? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | open_mfdataset() memory error in v0.10 277538485 |
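The comment of 2017-12-14T17:58:05Z asks whether the problem can be reproduced with netCDF4-python alone, without xarray. A minimal sketch of that check, with `path` as a placeholder for one of the reporter's NetCDF files (the actual filenames are not given in these comments):

```
# Open the file with netCDF4-python directly, bypassing xarray, to see
# whether the memory error originates in the underlying library.
import netCDF4

path = "problem_file.nc"  # placeholder; substitute one of the affected files

ds = netCDF4.Dataset(path)
print(ds)             # summary of dimensions, variables and attributes
print(ds.filepath())  # the call implicated in Unidata/netcdf4-python#506
ds.close()
```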
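The 2017-12-13T17:54:54Z comment sketches a lock around xarray's `is_remote_uri` helper. A self-contained reconstruction of that fragment (the prose around it is truncated in the export, so treat this as an illustrative sketch rather than the confirmed fix):

```
import re
import threading

# Module-level lock serialising the regex check, as in the comment's fragment;
# the idea is to rule out a thread-safety problem when open_mfdataset() probes
# many paths concurrently.
LOCK = threading.Lock()

def is_remote_uri(path):
    # True for http:// or https:// URLs, False for local file paths.
    with LOCK:
        return bool(re.search(r'^https?\://', path))
```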
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
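For reference, the filter described at the top of the page (issue = 277538485, user = 1217238, sorted by updated_at descending) can be reproduced against a SQLite database with this schema. A sketch using Python's sqlite3 module, with `github.db` as a placeholder filename (the export does not name the database file):

```
import sqlite3

# "github.db" is a placeholder; point this at the database behind the page.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 277538485 AND [user] = 1217238
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, created, updated, association, body in rows:
    print(comment_id, updated, (body or "")[:60])
conn.close()
```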