issue_comments
4 rows where issue = 493058488 and user = 15016780 sorted by updated_at descending
Issue 493058488: `ds.load()` with local files stalls and fails, and `to_zarr` does not include `store` in the dask graph (4 comments)
Each entry below gives: id · comment URL · user · author_association · created_at / updated_at ▲, then the comment body, then reactions and issue id.
531617569 · https://github.com/pydata/xarray/issues/3306#issuecomment-531617569 · abarciauskas-bgse 15016780 · NONE · created/updated 2019-09-16T01:22:09Z
Thanks @rabernat. I tried what you suggested (with a small subset, since the source files are quite large) and it works on smaller subsets when writing locally. That leads me to suspect that running the same process on larger datasets is overloading memory, but I can't confirm the root cause yet. This isn't blocking my current strategy, so I'm closing for now.
reactions: none · issue: 493058488
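The memory suspicion above is about eagerly loading an entire dataset at once; the usual remedy is to process the data in fixed-size chunks. A minimal numpy sketch of that idea (the file name and sizes are made up for illustration; this is not xarray's implementation, just the underlying pattern):

```python
import os
import tempfile

import numpy as np

# Stand-in for a large on-disk array (hypothetical file, small enough to demo).
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "big_array.dat")
data = np.memmap(path, dtype="float64", mode="w+", shape=(1_000_000,))
data[:] = 1.0
data.flush()

# Instead of loading the whole array (the moral equivalent of ds.load()),
# stream over fixed-size chunks and accumulate a running sum.
chunk_size = 100_000
src = np.memmap(path, dtype="float64", mode="r", shape=(1_000_000,))
total = 0.0
for start in range(0, src.shape[0], chunk_size):
    total += src[start:start + chunk_size].sum()  # only one chunk resident at a time

mean = total / src.shape[0]
print(mean)  # 1.0
```

With xarray the same effect comes from opening the files lazily with a `chunks=` argument and letting dask stream the computation, rather than calling `ds.load()` first.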
531493820 · https://github.com/pydata/xarray/issues/3306#issuecomment-531493820 · abarciauskas-bgse 15016780 · NONE · created/updated 2019-09-14T16:34:56Z
I recall this also happening when storing locally, but I can't reproduce it at the moment because the Kubernetes cluster I'm using now is not a Pangeo hub and is not set up to use EFS.
reactions: none · issue: 493058488
531486715 · https://github.com/pydata/xarray/issues/3306#issuecomment-531486715 · abarciauskas-bgse 15016780 · NONE · created/updated 2019-09-14T15:03:04Z
@rabernat good points. One thing I'm not sure how to make reproducible is writing to a remote file store, since it usually requires access to a write-protected cloud storage provider. Any tips on this? I have what should be an otherwise working example here: https://gist.github.com/abarciauskas-bgse/d0aac2ae9bf0b06f52a577d0a6251b2d - let me know if this is an OK format to share for reproducing the issue.
reactions: none · issue: 493058488
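One common way to sidestep the write-protected cloud store when sharing a reproducer is to substitute a local temporary directory for the remote store. A stdlib-only sketch of that substitution, where `write_store` is a hypothetical helper standing in for the real `to_zarr` call:

```python
import json
import pathlib
import tempfile

def write_store(store_path, arrays):
    """Hypothetical helper: write each named array as a JSON file under store_path."""
    root = pathlib.Path(store_path)
    root.mkdir(parents=True, exist_ok=True)
    for name, values in arrays.items():
        (root / f"{name}.json").write_text(json.dumps(values))

# The temp directory plays the role of the S3 bucket, so the example
# needs no cloud credentials and anyone can run it.
with tempfile.TemporaryDirectory() as store:
    write_store(store, {"sst": [290.1, 290.4], "lat": [10.0, 10.25]})
    written = sorted(p.name for p in pathlib.Path(store).glob("*.json"))
    print(written)  # ['lat.json', 'sst.json']
```

The same swap works for a real `ds.to_zarr(store)` call: pass a local path first, and only point at the cloud store once the local version behaves.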
531435069 · https://github.com/pydata/xarray/issues/3306#issuecomment-531435069 · abarciauskas-bgse 15016780 · NONE · created/updated 2019-09-14T01:42:22Z
Update: I've made some progress on determining the source of this issue. It seems related to the source dataset's variables. When I use 2 OPeNDAP URLs with 4 parameterized variables, things work fine and I get back a dataset; however, if I omit the parameterized data variables from the URLs, I get back an additional variable. In the first case (with the parameterized variables) I achieve the expected result (the data is stored on S3); in the second case (no parameterized variables), I do not.
reactions: none · issue: 493058488
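If the extra variable returned in the second case is what trips up `to_zarr`, one workaround is to drop it before writing. A hedged sketch against a synthetic in-memory dataset (the variable names are made up; `drop_vars` is xarray's method for removing variables, and `open_dataset(..., drop_variables=[...])` achieves the same at load time):

```python
import numpy as np
import xarray as xr

# Hypothetical stand-in for a dataset opened from an OPeNDAP URL: one data
# variable we want, plus an extra variable that only shows up when the
# parameterized variables are omitted from the URL.
ds = xr.Dataset(
    {
        "analysed_sst": ("time", np.array([290.1, 290.4])),
        "sst_anomaly": ("time", np.array([0.1, 0.4])),  # the unwanted extra
    },
    coords={"time": [0, 1]},
)

# Drop the extra variable so both URL styles yield the same dataset
# before to_zarr is called.
trimmed = ds.drop_vars("sst_anomaly")
print(list(trimmed.data_vars))  # ['analysed_sst']
```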
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
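The schema can be exercised directly with Python's built-in `sqlite3` module. This sketch recreates the table in memory (the foreign-key clauses are dropped so it is self-contained), inserts two made-up rows, and runs the same filter-and-sort query this page shows (issue = 493058488, user = 15016780, newest `updated_at` first):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
  [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY, [node_id] TEXT,
  [user] INTEGER, [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
  [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Two sample rows (only the columns the query touches are filled in).
rows = [
    (531617569, 15016780, 493058488, "2019-09-16T01:22:09Z"),
    (531435069, 15016780, 493058488, "2019-09-14T01:42:22Z"),
]
conn.executemany(
    "INSERT INTO issue_comments (id, user, issue, updated_at) VALUES (?, ?, ?, ?)",
    rows,
)

# The page's query: filter by issue and user, sort by updated_at descending.
ids = [r[0] for r in conn.execute(
    "SELECT id FROM issue_comments WHERE issue = ? AND [user] = ? "
    "ORDER BY updated_at DESC",
    (493058488, 15016780),
)]
print(ids)  # [531617569, 531435069]
```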