issue_comments
5 rows where author_association = "MEMBER" and issue = 344621749 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
1488891109 | https://github.com/pydata/xarray/issues/2314#issuecomment-1488891109 | https://api.github.com/repos/pydata/xarray/issues/2314 | IC_kwDOAMm_X85Yvqzl | dcherian 2448579 | 2023-03-29T16:01:05Z | 2023-03-29T16:01:05Z | MEMBER | We've deleted the internal | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Chunked processing across multiple raster (geoTIF) files 344621749
417413527 | https://github.com/pydata/xarray/issues/2314#issuecomment-417413527 | https://api.github.com/repos/pydata/xarray/issues/2314 | MDEyOklzc3VlQ29tbWVudDQxNzQxMzUyNw== | shoyer 1217238 | 2018-08-30T18:04:29Z | 2018-08-30T18:04:29Z | MEMBER | I see now that you are using dask-distributed, but I guess there are still too many intermediate outputs here to do a single rechunk operation. The crude but effective way to solve this problem would be to loop over spatial tiles using an indexing operation to pull out only a limited extent, compute the calculation on each tile and then reassemble the tiles at the end. To see if this will work, you might try computing a single time-series on your merged dataset before calling In theory, I think using | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Chunked processing across multiple raster (geoTIF) files 344621749
417412405 | https://github.com/pydata/xarray/issues/2314#issuecomment-417412405 | https://api.github.com/repos/pydata/xarray/issues/2314 | MDEyOklzc3VlQ29tbWVudDQxNzQxMjQwNQ== | scottyhq 3924836 | 2018-08-30T18:01:02Z | 2018-08-30T18:01:02Z | MEMBER | As @darothen mentioned, the first thing is to check that the geotiffs themselves are tiled (otherwise I'm guessing that open_rasterio() will open the entire thing). You can do this with: Here is the mentioned example notebook which works for tiled geotiffs stored on google cloud: https://github.com/scottyhq/pangeo-example-notebooks/tree/binderfy You can use the 'launch binder' button to run it with a pangeo dask-kubernetes cluster, or just read through the landsat8-cog-ndvi.ipynb notebook. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Chunked processing across multiple raster (geoTIF) files 344621749
417404832 | https://github.com/pydata/xarray/issues/2314#issuecomment-417404832 | https://api.github.com/repos/pydata/xarray/issues/2314 | MDEyOklzc3VlQ29tbWVudDQxNzQwNDgzMg== | shoyer 1217238 | 2018-08-30T17:38:40Z | 2018-08-30T17:42:00Z | MEMBER | I think the explicit ~~If you drop the line that calls | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Chunked processing across multiple raster (geoTIF) files 344621749
417135276 | https://github.com/pydata/xarray/issues/2314#issuecomment-417135276 | https://api.github.com/repos/pydata/xarray/issues/2314 | MDEyOklzc3VlQ29tbWVudDQxNzEzNTI3Ng== | jhamman 2443309 | 2018-08-29T23:04:10Z | 2018-08-29T23:04:10Z | MEMBER | pinging @scottyhq and @darothen who have both been exploring similar use cases here. I think you all met at the recent pangeo meeting. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Chunked processing across multiple raster (geoTIF) files 344621749
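shoyer's suggestion above (loop over spatial tiles via indexing, compute each tile alone, reassemble at the end) can be sketched with plain NumPy. This is an illustrative sketch, not code from the thread: the tile size and the placeholder computation are assumptions, and with xarray one would carve out tiles with `isel`/slicing instead of raw array indexing.

```python
import numpy as np

def process_in_tiles(arr, tile=256, func=lambda a: a * 2.0):
    """Apply `func` to each spatial tile of a 2-D array and reassemble.

    Crude but effective: only one tile's worth of input is handed to
    `func` at a time, mirroring the loop-over-tiles approach from the
    thread. `func` here is a stand-in for the real per-tile calculation.
    """
    out = np.empty(arr.shape, dtype=float)
    ny, nx = arr.shape
    for y0 in range(0, ny, tile):
        for x0 in range(0, nx, tile):
            # Indexing pulls out only a limited extent; slices past the
            # array edge are clamped automatically, so ragged borders work.
            sub = arr[y0:y0 + tile, x0:x0 + tile]
            out[y0:y0 + tile, x0:x0 + tile] = func(sub)
    return out
```

The reassembly is trivial here because each tile writes into a disjoint region of the preallocated output; the same pattern applies when each tile is computed on a dask cluster and the results are concatenated afterwards.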
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
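The filter described at the top of the page ("5 rows where author_association = "MEMBER" and issue = 344621749 sorted by updated_at descending") corresponds to a straightforward query against this schema. A minimal sketch using an in-memory SQLite database follows; the sample row is invented for illustration, and the foreign-key REFERENCES clauses are dropped so the snippet is self-contained:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trimmed copy of the schema above (REFERENCES removed: no users/issues tables here).
conn.executescript("""
CREATE TABLE issue_comments (
   html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY,
   node_id TEXT, user INTEGER, created_at TEXT, updated_at TEXT,
   author_association TEXT, body TEXT, reactions TEXT,
   performed_via_github_app TEXT, issue INTEGER
);
""")
conn.execute(
    "INSERT INTO issue_comments (id, author_association, issue, updated_at)"
    " VALUES (?, ?, ?, ?)",
    (417135276, "MEMBER", 344621749, "2018-08-29T23:04:10Z"),
)
# The filter and sort shown on this page.
rows = conn.execute(
    "SELECT id FROM issue_comments"
    " WHERE author_association = 'MEMBER' AND issue = 344621749"
    " ORDER BY updated_at DESC"
).fetchall()
```

Sorting ISO 8601 timestamps as TEXT works here because their lexicographic and chronological orders coincide.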