issues: 187873247
| field | value |
|---|---|
| id | 187873247 |
| node_id | MDU6SXNzdWUxODc4NzMyNDc= |
| number | 1094 |
| title | Supporting out-of-core computation/indexing for very large indexes |
| user | 4160723 |
| state | open |
| locked | 0 |
| assignee | |
| milestone | |
| comments | 5 |
| created_at | 2016-11-08T00:56:56Z |
| updated_at | 2021-01-26T20:09:12Z |
| closed_at | |
| author_association | MEMBER |
| active_lock_reason | |
| draft | |
| pull_request | |
| reactions | { "url": "https://api.github.com/repos/pydata/xarray/issues/1094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| performed_via_github_app | |
| state_reason | |
| repo | 13221727 |
| type | issue |

body:

(Follow-up of the discussion here: https://github.com/pydata/xarray/pull/1024#issuecomment-258524115.)

xarray + dask.array successfully enable out-of-core computation for very large variables that don't fit in memory. One current limitation is that the indexes of a `Dataset` or `DataArray` are still fully loaded into memory.

This may be problematic in some specific cases where we have to deal with very large indexes. As an example, big unstructured meshes often have coordinates (x, y, z) arranged as 1-d arrays whose length equals the number of nodes, which can be very large! (See, e.g., the ugrid conventions.)

It would be very nice if xarray could also help with these use cases. Therefore I'm wondering if (and how) out-of-core support can be extended to indexes and indexing.

I've briefly looked at the documentation on dask.dataframe, whose partitioned indexes might be one starting point. My knowledge of dask is very limited, though, so I have no doubt that this suggestion is very simplistic and not very efficient, or that there are better approaches. I'm also certainly missing other issues not directly related to indexing. Any thoughts?

cc @shoyer @mrocklin
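The limitation described in the body is easy to see in a short, self-contained sketch (the dataset shape and chunk size below are made up for illustration): chunking turns the data variable into a lazy dask array, while the dimension coordinate still backs an in-memory pandas.Index.

```python
# Minimal sketch of the limitation: data variables can be dask-backed (lazy),
# but a dimension coordinate is materialized as an in-memory pandas.Index.
# The sizes here are hypothetical stand-ins for a large unstructured mesh.
import numpy as np
import xarray as xr

n_nodes = 1_000  # real meshes can have hundreds of millions of nodes
ds = xr.Dataset(
    {"depth": ("node", np.random.rand(n_nodes))},
    coords={"node": np.arange(n_nodes)},
).chunk({"node": 100})

print(type(ds["depth"].data))    # dask array -> lazy, out of core
print(type(ds.indexes["node"]))  # pandas Index -> fully loaded in memory
```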
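The partitioned-index idea mentioned in the body could look roughly like the toy sketch below, loosely modeled on dask.dataframe's sorted divisions. Everything here (`PartitionedIndex`, `get_loc`, the loader callables) is a hypothetical illustration, not existing xarray or dask API: only the small in-memory list of division boundaries is consulted to pick a partition, and a label lookup then loads just that one partition.

```python
# Hedged sketch, assuming a sorted 1-d index split into contiguous partitions
# with known division boundaries (the smallest label of each partition).
import bisect

class PartitionedIndex:
    """Toy out-of-core index: each partition is loaded lazily on demand."""

    def __init__(self, loaders, divisions):
        # loaders[i]() -> sorted list of labels for partition i (e.g. read
        # from disk); divisions[i] is the first label of partition i.
        self.loaders = loaders
        self.divisions = divisions

    def get_loc(self, label):
        # Find the candidate partition using only the small in-memory
        # `divisions` list, then load that single partition to search it.
        i = max(bisect.bisect_right(self.divisions, label) - 1, 0)
        partition = self.loaders[i]()  # only one partition is touched
        j = bisect.bisect_left(partition, label)
        if j == len(partition) or partition[j] != label:
            raise KeyError(label)
        return i, j  # (partition number, position within partition)

# Usage: three "on-disk" partitions of a sorted coordinate.
parts = [[0, 10, 20], [30, 40, 50], [60, 70, 80]]
idx = PartitionedIndex([lambda p=p: p for p in parts], divisions=[0, 30, 60])
print(idx.get_loc(40))  # (1, 1): found without loading partitions 0 or 2
```

As in dask.dataframe, the design choice is that the divisions stay tiny (one label per partition) regardless of how large the full index grows, so label-based selection never requires loading the whole index.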