sha,message,author_date,committer_date,raw_author,raw_committer,repo,author,committer
13c09dc28ec8ff791c6d87e2d8e80c362c65ffd4,"Fixed dask.optimize on datasets (#4438) * Fixed dask.optimize on datasets Another attempt to fix #3698. The issue with my fix in is that we hit `Variable._dask_finalize` in both `dask.optimize` and `dask.persist`. We want to do the culling of unnecessary tasks (`test_persist_Dataset`) but only in the persist case, not optimize (`test_optimize`). * Update whats-new.rst * Update doc/whats-new.rst Co-authored-by: Deepak Cherian Co-authored-by: Maximilian Roos <5635139+max-sixty@users.noreply.github.com>",2020-09-20T05:21:56Z,2020-09-20T05:21:56Z,414a3ca56e5eb92bdfc6b3cac35417bf5ba51f54,cd792325681cbad9f663f2879d8b69f1edbb678f,13221727,1312546,19864447
9a8a62ba551e737dc87e39aded2f7cc788ff118d,"Fix optimize for chunked DataArray (#4432) Previously we generated in invalidate Dask task graph, becuase the lines removed here dropped keys that were referenced elsewhere in the task graph. The original implementation had a comment indicating that this was to cull: https://github.com/pydata/xarray/blame/502a988ad5b87b9f3aeec3033bf55c71272e1053/xarray/core/variable.py#L384 Just spot-checking things, I think we're OK here though. Something like `dask.visualize(arr[[0]], optimize_graph=True)` indicates that we're OK. Closes https://github.com/pydata/xarray/issues/3698 Co-authored-by: Maximilian Roos <5635139+max-sixty@users.noreply.github.com>",2020-09-17T23:19:22Z,2020-09-17T23:19:22Z,414a3ca56e5eb92bdfc6b3cac35417bf5ba51f54,cd792325681cbad9f663f2879d8b69f1edbb678f,13221727,1312546,19864447
e1dafe676812409834ccac3418ecf47600b00615,Fix map_blocks example (#4305),2020-08-04T03:38:50Z,2020-08-04T03:38:50Z,414a3ca56e5eb92bdfc6b3cac35417bf5ba51f54,cd792325681cbad9f663f2879d8b69f1edbb678f,13221727,1312546,19864447
5200a182f324be21423fd2f8214b8ef04b5845ce,"Update map_blocks and map_overlap docstrings (#4303) This reference an `obj` argument that only exists in parallel. The object being referenced is actually `self`.",2020-08-03T18:06:10Z,2020-08-03T18:06:10Z,414a3ca56e5eb92bdfc6b3cac35417bf5ba51f54,cd792325681cbad9f663f2879d8b69f1edbb678f,13221727,1312546,19864447
cafcaeea897894e3a2f44a38bd33c50a48c86215,"Fix map_blocks HLG layering (#3598) * Fix map_blocks HLG layering This fixes an issue with the HighLevelGraph noted in https://github.com/pydata/xarray/pull/3584, and exposed by a recent change in Dask to do more HLG fusion. * update * black * update",2019-12-07T04:30:18Z,2019-12-07T04:30:18Z,414a3ca56e5eb92bdfc6b3cac35417bf5ba51f54,0c7e9e762dbfd6554e60c953bf27493047d95109,13221727,1312546,2448579
ec255eba7cce749c25e1d7b6f0a7fc537ff61841,Update asv.conf.json (#2693),2019-01-19T17:45:19Z,2019-01-19T17:45:19Z,414a3ca56e5eb92bdfc6b3cac35417bf5ba51f54,f10b21bed2846b879806f87039b77245b18e7671,13221727,1312546,1217238
8e541deca2e20efe080aa1bca566d9966ea2f244,"Added show_commit_url to asv.conf (#1515) * Added show_commit_url to asv.conf This should setup the proper links from the published output to the commit on Github. FYI the benchmarks should be running stably now, and posted to http://pandas.pydata.org/speed/xarray. http://pandas.pydata.org/speed/xarray/regressions.xml has an RSS feed to the regressions. * Update asv.conf.json",2017-08-23T16:01:49Z,2017-08-23T16:01:49Z,414a3ca56e5eb92bdfc6b3cac35417bf5ba51f54,5f199557d0f8f69fbea5e027a407146e2669a812,13221727,1312546,