html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue https://github.com/pydata/xarray/issues/3213#issuecomment-615500990,https://api.github.com/repos/pydata/xarray/issues/3213,615500990,MDEyOklzc3VlQ29tbWVudDYxNTUwMDk5MA==,449558,2020-04-17T23:07:57Z,2020-04-17T23:07:57Z,NONE,"@shoyer thanks! Mostly spitballing here, but it's interesting to know that 2) would be the bigger problem in your opinion, I had assumed 1) would be the main issue. That raises the question whether it's easier to wrap ``scipy.sparse`` in a duck array, or to make ``pydata/sparse`` a viable solution for sklearn.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,479942077 https://github.com/pydata/xarray/issues/3213#issuecomment-615497160,https://api.github.com/repos/pydata/xarray/issues/3213,615497160,MDEyOklzc3VlQ29tbWVudDYxNTQ5NzE2MA==,449558,2020-04-17T22:51:09Z,2020-04-17T22:51:09Z,NONE,"Small comment from #3981: sklearn has just started running benchmarks, but it looks like pydata/sparse is not feature complete enough for us to use. We might be interested in having scipy.sparse support in xarray. There are two problems with scipy.sparse for us as far as I can see (this is very preliminary): it only has COO, which is not good for us, and ideally we'd want to avoid memory copies whenever we want to use xarray, and I think going from scipy.sparse to pydata/sparse will involve memory copies, even if pydata/sparse adds other formats.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,479942077