html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/585#issuecomment-336494114,https://api.github.com/repos/pydata/xarray/issues/585,336494114,MDEyOklzc3VlQ29tbWVudDMzNjQ5NDExNA==,1217238,2017-10-13T15:58:30Z,2017-10-13T15:58:30Z,MEMBER,"@rabernat Agreed, let's open a new issue for that.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,107424151
https://github.com/pydata/xarray/issues/585#issuecomment-336489532,https://api.github.com/repos/pydata/xarray/issues/585,336489532,MDEyOklzc3VlQ29tbWVudDMzNjQ4OTUzMg==,1197350,2017-10-13T15:41:32Z,2017-10-13T15:41:32Z,MEMBER,"This issue was closed by #1517. But there was plenty of discussion above about parallelizing groupby. Does #1517 make parallel groupby automatically work? My understanding is no. If that's the case, we probably need to open a new issue for parallel groupby.
cc @mrocklin ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,107424151
https://github.com/pydata/xarray/issues/585#issuecomment-324518345,https://api.github.com/repos/pydata/xarray/issues/585,324518345,MDEyOklzc3VlQ29tbWVudDMyNDUxODM0NQ==,1217238,2017-08-24T02:52:26Z,2017-08-24T02:52:26Z,MEMBER,I have a preliminary implementation up in https://github.com/pydata/xarray/pull/1517,"{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,107424151
https://github.com/pydata/xarray/issues/585#issuecomment-142482576,https://api.github.com/repos/pydata/xarray/issues/585,142482576,MDEyOklzc3VlQ29tbWVudDE0MjQ4MjU3Ng==,1217238,2015-09-23T03:49:46Z,2017-03-07T05:32:28Z,MEMBER,"Indeed, there's no need to load the entire dataset into memory first. I think open_mfdataset is the model to emulate here -- it's parallelism that just works.
I'm not quite sure how to do this transparently in groupby operations yet. The problem is that you do want to apply some groupby operations on dask arrays without loading the entire group into memory, if there are only a few groups on a large dataset and the function itself is written in terms of dask operations. I think we will probably need some syntax to disambiguate that scenario.
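For example (`ds` and its `t2m` variable are hypothetical dask-backed data) -- this is the case where the applied function should stay lazy:

```python
# with only a few groups (e.g. 12 months) over a huge dask-backed
# array, the applied function can be written as pure dask operations
def anomaly(group):
    # DataArray -> DataArray; never loads the group into memory
    return group - group.mean('time')

result = ds['t2m'].groupby('time.month').apply(anomaly)
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,107424151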
https://github.com/pydata/xarray/issues/585#issuecomment-249059201,https://api.github.com/repos/pydata/xarray/issues/585,249059201,MDEyOklzc3VlQ29tbWVudDI0OTA1OTIwMQ==,1328158,2016-09-22T23:39:41Z,2017-03-07T05:32:04Z,NONE,"This is good news for me as the functions I will apply take an ndarray as
input and return a corresponding ndarray as output. Once this is available
in xarray I'll be eager to give it a whirl...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,107424151
https://github.com/pydata/xarray/issues/585#issuecomment-249011817,https://api.github.com/repos/pydata/xarray/issues/585,249011817,MDEyOklzc3VlQ29tbWVudDI0OTAxMTgxNw==,1217238,2016-09-22T20:00:57Z,2016-09-22T20:00:57Z,MEMBER,"I think #964 provides a viable path forward here.
Previously, I was imagining the user provides a function that maps `xarray.DataArray` -> `xarray.DataArray`. Such functions are tricky to parallelize with dask.array because we need to run them to figure out the result dimensions/coordinates.
In contrast, with a user defined function `ndarray` -> `ndarray`, it's fairly straightforward to parallelize these with dask array (e.g., using `dask.array.elemwise` or `dask.array.map_blocks`). Then we could add the metadata back in afterwards with #964.
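For illustration, a rough sketch of that pattern (`detrend` is a hypothetical ndarray -> ndarray function, with made-up dimensions and chunks):

```python
import dask.array as da
import xarray as xr

def detrend(block):
    # hypothetical ndarray -> ndarray function: same shape in and out
    return block - block.mean()

arr = xr.DataArray(da.random.random((4, 100, 100), chunks=(1, 100, 100)),
                   dims=['time', 'y', 'x'])
# parallelize the plain-ndarray function over the blocks with dask
result_data = da.map_blocks(detrend, arr.data, dtype=arr.dtype)
# then add the xarray metadata back in afterwards
result = xr.DataArray(result_data, dims=arr.dims, coords=arr.coords)
```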
In principle, we could do this automatically -- especially if dask had a way to parallelize arbitrary NumPy generalized universal functions. Then the user could write something like `xarray.apply(func, data, signature=signature, dask_array='auto')` to automatically parallelize `func` over their data. In fact, I had this in some previous commits for #964, but took it out for now, just to reduce the scope of the change.
","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,107424151
https://github.com/pydata/xarray/issues/585#issuecomment-248979862,https://api.github.com/repos/pydata/xarray/issues/585,248979862,MDEyOklzc3VlQ29tbWVudDI0ODk3OTg2Mg==,1197350,2016-09-22T18:00:24Z,2016-09-22T18:00:24Z,MEMBER,"Does #964 help on this?
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,107424151
https://github.com/pydata/xarray/issues/585#issuecomment-248969870,https://api.github.com/repos/pydata/xarray/issues/585,248969870,MDEyOklzc3VlQ29tbWVudDI0ODk2OTg3MA==,1328158,2016-09-22T17:23:22Z,2016-09-22T17:23:22Z,NONE,"I'm adding this note to express an interest in the functionality described in Stephan's original description, i.e. a `parallel_apply` method/function which would apply a function in parallel utilizing multiple CPUs. I have (finally) worked out how to use `groupby` and `apply` for my application, but it would be much more useful if I could apply functions in parallel to take advantage of multiple CPUs. What's the expected effort to make something like this available in xarray? Several months ago I worked on doing this sort of thing without xarray, using the multiprocessing module and a shared memory object, and I may revisit that soon. However, I expect that a solution using xarray will be more elegant, so if such a thing is coming in the foreseeable future I may wait for it and focus on other tasks. Can anyone advise?
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,107424151
https://github.com/pydata/xarray/issues/585#issuecomment-226262120,https://api.github.com/repos/pydata/xarray/issues/585,226262120,MDEyOklzc3VlQ29tbWVudDIyNjI2MjEyMA==,1217238,2016-06-15T17:37:11Z,2016-06-15T17:37:11Z,MEMBER,"With the single-machine version of dask, we need to run one block first to infer the appropriate metadata for constructing the combined dataset.
A potentially better approach would be to optionally leverage dask.distributed, which has the ability to run computation at the same time as graph construction. `map_blocks` could then kick off a bunch of map tasks to execute in parallel, and only worry about reassembling the blocks in a reduce step after the results have come in.
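As a toy illustration of the 'run one block first' step (`apply_with_inferred_meta` is a made-up helper, not anything in xarray or dask):

```python
import dask.array as da

def apply_with_inferred_meta(func, arr):
    # eagerly evaluate func on just the first block to learn the
    # output dtype, then build the lazy graph over all blocks
    first = arr[tuple(slice(0, c[0]) for c in arr.chunks)].compute()
    sample = func(first)
    return da.map_blocks(func, arr, dtype=sample.dtype)

x = da.ones((8, 8), chunks=(4, 4))
y = apply_with_inferred_meta(lambda block: block * 2, x)
```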
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,107424151
https://github.com/pydata/xarray/issues/585#issuecomment-143692567,https://api.github.com/repos/pydata/xarray/issues/585,143692567,MDEyOklzc3VlQ29tbWVudDE0MzY5MjU2Nw==,1197350,2015-09-28T09:43:17Z,2015-09-28T09:43:17Z,MEMBER,":+1: Very useful idea!
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,107424151
https://github.com/pydata/xarray/issues/585#issuecomment-142480620,https://api.github.com/repos/pydata/xarray/issues/585,142480620,MDEyOklzc3VlQ29tbWVudDE0MjQ4MDYyMA==,5356122,2015-09-23T03:32:23Z,2015-09-23T03:32:23Z,MEMBER,"But do the xray objects have to exist in memory? I was thinking this could also work along with `open_mfdataset`: it would just load and operate on the chunks it needs.
Like the idea of applying this to groupby objects. I wonder if it could be done transparently to the user...
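Roughly what I have in mind (the file pattern and variable name are hypothetical):

```python
import xarray as xr

# each file becomes one or more dask chunks; nothing is read yet
ds = xr.open_mfdataset('data/*.nc')
# this only builds a lazy graph over the chunks
monthly = ds['t2m'].groupby('time.month').mean('time')
monthly.load()  # computation happens here, chunk by chunk
```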
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,107424151