html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/4234#issuecomment-739094107,https://api.github.com/repos/pydata/xarray/issues/4234,739094107,MDEyOklzc3VlQ29tbWVudDczOTA5NDEwNw==,2448579,2020-12-05T00:48:49Z,2020-12-05T00:48:49Z,MEMBER,"The indexes story will change soon; we may even have our own index classes. We should have pretty decent support for NEP-18 arrays in `DataArray.data` though, so IMO that's the best thing to try out to see where issues remain. NEP-35 is cool; it looks like we should use it in our `*_like` functions.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,659129613
https://github.com/pydata/xarray/issues/4234#issuecomment-739066140,https://api.github.com/repos/pydata/xarray/issues/4234,739066140,MDEyOklzc3VlQ29tbWVudDczOTA2NjE0MA==,1403768,2020-12-04T22:56:23Z,2020-12-04T22:56:23Z,MEMBER,"Wanted to give an update on some recent work. With NEP-35 (https://github.com/numpy/numpy/pull/16935, https://github.com/numpy/numpy/pull/17787) experimentally in NumPy, I've been exploring using GPUs with xarray a bit more, starting with basic groupby workflows. There are places in the code where xarray calls pandas directly, for example when building Indexes: https://github.com/pydata/xarray/blob/7152b41fa80a56db0ce88b241fbe4092473cfcf0/xarray/core/dataset.py#L150-L153 This is going to be challenging, primarily because xarray will need to determine which DataFrame library to use during Index creation (and possibly for other DataFrame objects). While there is a [consortium](https://data-apis.org/) developing potential solutions for libraries that want to support multiple dataframe libraries, I was going to keep hacking away and see what other issues may be lurking.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,659129613
https://github.com/pydata/xarray/issues/4234#issuecomment-660180546,https://api.github.com/repos/pydata/xarray/issues/4234,660180546,MDEyOklzc3VlQ29tbWVudDY2MDE4MDU0Ng==,2448579,2020-07-17T15:46:15Z,2020-07-17T15:46:15Z,MEMBER,"See the similar discussion for sparse here: https://github.com/pydata/xarray/issues/3245 `asarray` makes sense to me. I think we are also open to special `as_sparse`, `as_dense`, and `as_cupy` methods that return xarray objects with converted arrays. A `to_numpy_data` method (or `as_numpy`?) would always coerce to numpy appropriately. IIRC there's some way to read from disk to the GPU, isn't there? So it makes sense to expose that in our `open_*` functions. Re: index variables. Can we avoid this for now? Or are there going to be performance issues? The general problem will be handled as part of the index refactor (we've deferred pint support for indexes for this reason).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,659129613
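
Below is a minimal, hedged sketch of the two ideas these comments converge on: using NEP-35's experimental `like=` keyword so array creation follows the type of `DataArray.data`, and `as_cupy`/`as_numpy`-style conversions between NumPy- and CuPy-backed objects. The helper names `as_cupy` and `as_numpy` are hypothetical (xarray exposed no such methods at the time of these comments), and the example assumes NumPy >= 1.20 and CuPy installed with a working GPU.

```python
# Illustrative sketch only -- ``as_cupy``/``as_numpy`` are hypothetical helpers,
# not existing xarray API. Assumes NumPy >= 1.20 (NEP-35 ``like=``) and CuPy.
import cupy as cp
import numpy as np
import xarray as xr


def as_cupy(da: xr.DataArray) -> xr.DataArray:
    """Return a copy of ``da`` whose underlying data is a cupy.ndarray."""
    return da.copy(data=cp.asarray(da.data))


def as_numpy(da: xr.DataArray) -> xr.DataArray:
    """Coerce ``da`` back to a NumPy-backed DataArray."""
    data = da.data
    if isinstance(data, cp.ndarray):
        data = cp.asnumpy(data)
    return da.copy(data=np.asarray(data))


da = as_cupy(xr.DataArray(np.arange(6.0).reshape(2, 3), dims=("x", "y")))

# NEP-35: ``like=`` dispatches creation to the library backing ``da.data``,
# which is what the first comment suggests xarray's ``*_like`` helpers could use.
template = np.ones(da.shape, like=da.data)
assert isinstance(template, cp.ndarray)

# NEP-18 (__array_function__) keeps simple reductions on the GPU; coordinates
# and indexes still go through pandas, which is the limitation called out above.
print(as_numpy(da.mean("y")))
```

`da.copy(data=...)` is used here so that dims and coordinates are preserved while only the backing array type changes, which mirrors the "return xarray objects with converted arrays" behaviour described for the proposed `as_*` methods.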