html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/7650#issuecomment-1521804722,https://api.github.com/repos/pydata/xarray/issues/7650,1521804722,IC_kwDOAMm_X85atOWy,6213168,2023-04-25T13:36:43Z,2023-04-25T13:42:48Z,MEMBER,I can see that the conda-forge-feedstock has been updated; however requirements.txt still contains the pin,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1632422255
https://github.com/pydata/xarray/pull/7650#issuecomment-1521662528,https://api.github.com/repos/pydata/xarray/issues/7650,1521662528,IC_kwDOAMm_X85asrpA,6213168,2023-04-25T11:56:32Z,2023-04-25T11:56:32Z,MEMBER,All blockers to pandas 2 linked in this issue have been merged; is there anything outstanding?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1632422255
https://github.com/pydata/xarray/pull/7461#issuecomment-1517892096,https://api.github.com/repos/pydata/xarray/issues/7461,1517892096,IC_kwDOAMm_X85aeTIA,6213168,2023-04-21T14:07:08Z,2023-04-21T14:07:08Z,MEMBER,"It just occurred to me that xarray dropped Python 3.8 three months earlier than NEP-29 recommends. I think this is a problem.
Let's continue this discussion on #7777.","{""total_count"": 3, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 1, ""heart"": 0, ""rocket"": 0, ""eyes"": 2}",,1550109629
https://github.com/pydata/xarray/pull/7461#issuecomment-1515950820,https://api.github.com/repos/pydata/xarray/issues/7461,1515950820,IC_kwDOAMm_X85aW5Lk,6213168,2023-04-20T08:43:19Z,2023-04-20T08:43:19Z,MEMBER,">
> This also breaks xarray on ubuntu 20.04 which ships with Python 3.8 and is supported until April 2025. Python 3.8 is also supported at least until October 2024.
Not investing effort to support 5-year-old dependencies was a very conscious decision. This is not something unique we do; we simply adhere to NEP29: https://numpy.org/neps/nep-0029-deprecation_policy.html
If for whatever reason you want to use the python shipped by ubuntu 20.04, as opposed to conda/venv/poetry/whatever, you should also be prepared to stick to older versions of the python packages. Note that 5 years is the duration of *security* support. I'm not personally aware of any security issues in xarray since Python 3.8 support was dropped (I've been a bit out of the loop and I could be proven wrong), but in the unlikely event that one should arise in xarray, we would consider a backport to Python 3.8.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1550109629
https://github.com/pydata/xarray/pull/7461#issuecomment-1507163165,https://api.github.com/repos/pydata/xarray/issues/7461,1507163165,IC_kwDOAMm_X85Z1Xwd,6213168,2023-04-13T15:21:52Z,2023-04-13T15:21:52Z,MEMBER,"> I assume you have given this a lot of thought, but imho the minimum dependency versions should be decided according to features needed, not timing.
It's not based on timing.
The policy is there so that, when a developer finds that they have to do extra labour to support an old version of a dependency, they can instead drop the support for the old version without needing to seek approval from the maintainers.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1550109629
https://github.com/pydata/xarray/issues/6803#issuecomment-1280764797,https://api.github.com/repos/pydata/xarray/issues/6803,1280764797,IC_kwDOAMm_X85MVut9,6213168,2022-10-17T12:15:36Z,2022-10-17T12:20:02Z,MEMBER,"```python
new_data_future = xr.apply_ufunc(
    _copy_test,
    data,
    a_x,
    ...
)
```
*instead* of using kwargs.
I've opened https://github.com/dask/distributed/issues/7140 to simplify this. With it implemented, my snippet
```python
test = np.full((20,), 30)
a = da.from_array(test)
dsk = client.scatter(dict(a.dask), broadcast=True)
a = da.Array(dsk, name=a.name, chunks=a.chunks, dtype=a.dtype, meta=a._meta, shape=a.shape)
a_x = xarray.DataArray(a, dims=[""new_z""])
```
would become
```python
test = np.full((20,), 30)
a_x = xarray.DataArray(test, dims=[""new_z""]).chunk()
a_x = client.scatter(a_x)
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1307523148
https://github.com/pydata/xarray/issues/6803#issuecomment-1280746923,https://api.github.com/repos/pydata/xarray/issues/6803,1280746923,IC_kwDOAMm_X85MVqWr,6213168,2022-10-17T12:01:17Z,2022-10-17T12:01:17Z,MEMBER,"Having said the above, your design is... contrived.
There isn't, as of today, a straightforward way to scatter a local dask collection (`persist()` will push the whole thing through the scheduler and likely send it out of memory).
Workaround:
```python
test = np.full((20,), 30)
a = da.from_array(test)
dsk = client.scatter(dict(a.dask), broadcast=True)
a = da.Array(dsk, name=a.name, chunks=a.chunks, dtype=a.dtype, meta=a._meta, shape=a.shape)
a_x = xarray.DataArray(a, dims=[""new_z""])
```
Once you have `a_x`, you just pass it to the args (not kwargs) of `apply_ufunc`.
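For example (a sketch; it reuses ``_copy_test`` and ``data`` from the snippet in this thread, and ``output_dtypes`` may or may not be needed depending on your xarray version):
```python
new_data = xr.apply_ufunc(
    _copy_test,        # the kernel being applied
    data,              # chunked DataArray input
    a_x,               # positional args are resolved chunk-by-chunk, unlike kwargs
    dask='parallelized',
    output_dtypes=[data.dtype],
)
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1307523148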
https://github.com/pydata/xarray/issues/6803#issuecomment-1280729879,https://api.github.com/repos/pydata/xarray/issues/6803,1280729879,IC_kwDOAMm_X85MVmMX,6213168,2022-10-17T11:45:31Z,2022-10-17T11:45:31Z,MEMBER,"> This is still an issue. I noticed that the documentation of `map_blocks` states: **kwargs** ([mapping](https://docs.python.org/3/glossary.html#term-mapping)) – Passed verbatim to func after unpacking. xarray objects, if any, will not be subset to blocks. _Passing dask collections in kwargs is not allowed_.
>
> Is this the case for `apply_ufunc` as well?
``test_future`` is not a dask collection. It's a ``distributed.Future``, which points to an arbitrary, opaque data blob that xarray has no means to know about.
FWIW, I could reproduce the issue, where the future in the kwargs is not resolved to the data it points to as one would expect.
Minimal reproducer:
```python
import distributed
import xarray
client = distributed.Client(processes=False)
x = xarray.DataArray([1, 2]).chunk()
test_future = client.scatter(""Hello World"")
def f(d, test):
    print(test)
    return d

y = xarray.apply_ufunc(
    f,
    x,
    dask='parallelized',
    output_dtypes=""float64"",
    kwargs={'test': test_future},
)
y.compute()
```
Expected print output: `Hello World`
Actual print output: the repr of the unresolved ``distributed.Future``","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1307523148
https://github.com/pydata/xarray/pull/6566#issuecomment-1124342119,https://api.github.com/repos/pydata/xarray/issues/6566,1124342119,IC_kwDOAMm_X85DBBln,6213168,2022-05-11T22:12:24Z,2022-05-11T22:12:24Z,MEMBER,"> Lets skip windows for now.
>
> @crusaderky this looks weird:
>
> > For some reason counting the number of tasks in the dask graph via `len(ds.__dask_graph__())` [raises an Error on Windows](https://github.com/pydata/xarray/runs/6393168006?check_suite_focus=true).
I think that's the context manager teardown, not the task counting","{""total_count"": 3, ""+1"": 3, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1223270563
https://github.com/pydata/xarray/pull/6211#issuecomment-1026020646,https://api.github.com/repos/pydata/xarray/issues/6211,1026020646,IC_kwDOAMm_X849J9Um,6213168,2022-01-31T17:21:40Z,2022-01-31T17:21:40Z,MEMBER,"> More than a one-off — I got them multiple times. Though they look like tests that are liable to be flaky...
They are both tests that fail on the cleanup of ``@gen_cluster``, and specifically on ``check_process_leak``.
However, in both cases, the test itself doesn't spawn any processes. What I think is happening is that something unrelated, at some point *before* the failing tests, spawned a subprocess that has become unresponsive to SIGTERM.
I'm updating ``check_process_leak`` to be more aggressive in the cleanup before the test.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1118729170
https://github.com/pydata/xarray/issues/5648#issuecomment-924147211,https://api.github.com/repos/pydata/xarray/issues/5648,924147211,IC_kwDOAMm_X843FV4L,6213168,2021-09-21T16:22:11Z,2021-09-21T16:22:11Z,MEMBER,I'd like to attend too,"{""total_count"": 2, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",,956103236
https://github.com/pydata/xarray/issues/5654#issuecomment-906684380,https://api.github.com/repos/pydata/xarray/issues/5654,906684380,IC_kwDOAMm_X842Cufc,6213168,2021-08-26T19:30:25Z,2021-08-26T19:50:20Z,MEMBER,"The third and final issue is when ``numpy.broadcast_to`` is applied to the output of ``zeros_like``:
```
>>> import sparse
>>> s = sparse.COO.from_numpy([0, 0, 1, 2])
>>> np.broadcast_to(np.zeros_like(s.todense(), shape=()), (3, ))
array([0, 0, 0])
>>> np.broadcast_to(np.zeros_like(s, shape=()), (3, ))
ValueError: The data length does not match the coordinates given.
len(data) = 0, but 3 coords specified.
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,957131705
https://github.com/pydata/xarray/issues/5654#issuecomment-906633901,https://api.github.com/repos/pydata/xarray/issues/5654,906633901,IC_kwDOAMm_X842CiKt,6213168,2021-08-26T18:15:29Z,2021-08-26T19:47:12Z,MEMBER,"Ah, shape= was very recently added in 0.12.0. It wasn't there in 0.11.2.
[EDIT] It is not the only problem. Investigating further.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,957131705
https://github.com/pydata/xarray/issues/5654#issuecomment-906656030,https://api.github.com/repos/pydata/xarray/issues/5654,906656030,IC_kwDOAMm_X842Cnke,6213168,2021-08-26T18:47:32Z,2021-08-26T18:47:42Z,MEMBER,"The second issue is that ``sparse.zeros_like`` doesn't accept the ``order=`` parameter, which is required by the same dask code linked above (it's in the kwargs in dask/wrap.py:133). This in turn triggers an unfortunate handling of TypeError on behalf of ``@curry``, which obfuscates the exception.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,957131705
https://github.com/pydata/xarray/issues/5654#issuecomment-906621248,https://api.github.com/repos/pydata/xarray/issues/5654,906621248,IC_kwDOAMm_X842CfFA,6213168,2021-08-26T17:56:52Z,2021-08-26T18:02:53Z,MEMBER,"``da.zeros_like(a)``
internally invokes
``da.zeros(a.shape, meta=a._meta)``
which internally invokes
``np.broadcast(np.zeros_like(a._meta, shape=1), a.shape)``
The problem is that ``sparse.zeros_like`` does not accept the optional ``shape=`` parameter, which is new in numpy 1.17.
This is where it gets triggered:
https://github.com/dask/dask/blob/85f0b14bd36a5135ce51aeee067b6207374b00c4/dask/array/wrap.py#L128-L168
I don't think dask should write a workaround for this; it should just be fixed upstream, right? CC @jrbourbeau for an opinion.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,957131705
https://github.com/pydata/xarray/issues/5654#issuecomment-906584459,https://api.github.com/repos/pydata/xarray/issues/5654,906584459,IC_kwDOAMm_X842CWGL,6213168,2021-08-26T17:02:09Z,2021-08-26T17:02:09Z,MEMBER,"Narrowed it down.
```python
>>> import dask.array as da
>>> import sparse
>>> s = sparse.COO.from_numpy([0, 0, 1, 2])
>>> a = da.from_array(s)
>>> z = da.zeros_like(a)
>>> z
dask.array
>>> z.compute()
```
numpy-1.20.1
dask-2021.3.0
sparse-0.11.2
","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,957131705
https://github.com/pydata/xarray/issues/5654#issuecomment-906479974,https://api.github.com/repos/pydata/xarray/issues/5654,906479974,IC_kwDOAMm_X842B8lm,6213168,2021-08-26T14:48:38Z,2021-08-26T14:48:38Z,MEMBER,I see now. I got really confused by the matplotlib issue which is what the opening post of this ticket is about. Would it be possible to have the two issues tracked by separate tickets?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,957131705
https://github.com/pydata/xarray/issues/5654#issuecomment-906475469,https://api.github.com/repos/pydata/xarray/issues/5654,906475469,IC_kwDOAMm_X842B7fN,6213168,2021-08-26T14:43:13Z,2021-08-26T14:43:13Z,MEMBER,"I'm a bit lost.
The failure in ``xarray/tests/test_sparse.py::test_chunk`` doesn't appear anywhere in recent CI runs and I can't reproduce it locally.
The ongoing failures in upstream-dev:
```
FAILED xarray/tests/test_plot.py::TestFacetGrid::test_can_set_norm
FAILED xarray/tests/test_plot.py::TestCFDatetimePlot::test_cfdatetime_line_plot
FAILED xarray/tests/test_plot.py::TestCFDatetimePlot::test_cfdatetime_pcolormesh_plot
FAILED xarray/tests/test_plot.py::TestCFDatetimePlot::test_cfdatetime_contour_plot
```
These aren't in any way related to either sparse or dask; they appear when I upgrade matplotlib from 3.4.3 to 3.5.0b1.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,957131705
https://github.com/pydata/xarray/pull/5610#issuecomment-880880580,https://api.github.com/repos/pydata/xarray/issues/5610,880880580,MDEyOklzc3VlQ29tbWVudDg4MDg4MDU4MA==,6213168,2021-07-15T17:26:12Z,2021-07-15T17:26:28Z,MEMBER,"> Unless you want to add a ""internals"" whats-new.rst entry?
I think it may be a bit overkill?","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,945560052
https://github.com/pydata/xarray/pull/5445#issuecomment-861454733,https://api.github.com/repos/pydata/xarray/issues/5445,861454733,MDEyOklzc3VlQ29tbWVudDg2MTQ1NDczMw==,6213168,2021-06-15T12:25:58Z,2021-06-15T12:25:58Z,MEMBER,"LGTM.
Note that the function doesn't align indices.
e.g. if you have:
```python
a = DataArray([0,1,2,3], dims=[""x""], coords={""x"": [0,10,20,30]}).chunk(3)
b = DataArray([0,1,2,3], dims=[""x""], coords={""x"": [10,30,40,50]}).chunk(2)
a, b = unify_chunks(a, b)
```
You'll end up with aligned chunks, but not aligned coords (e.g. both outputs still have values=[0,1,2,3]), which doesn't make much sense.
I think this is OK to leave as it is; in this specific case the issue should not do much harm anyway, and it would just be a slowdown in most cases. I'd like to hear @dcherian's or @jhamman's opinions though.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,912932344
https://github.com/pydata/xarray/issues/5165#issuecomment-823950210,https://api.github.com/repos/pydata/xarray/issues/5165,823950210,MDEyOklzc3VlQ29tbWVudDgyMzk1MDIxMA==,6213168,2021-04-21T10:17:37Z,2021-04-21T10:17:37Z,MEMBER,"Reproduced and reopened at https://github.com/dask/dask/issues/7583.
This impacts the threading and sync schedulers; processes and distributed are unaffected.
Closing here, as this issue has nothing to do with xarray.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,859218255
https://github.com/pydata/xarray/pull/4965#issuecomment-789770103,https://api.github.com/repos/pydata/xarray/issues/4965,789770103,MDEyOklzc3VlQ29tbWVudDc4OTc3MDEwMw==,6213168,2021-03-03T14:52:53Z,2021-03-03T14:52:53Z,MEMBER,This is ready for review and merge,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,817271773
https://github.com/pydata/xarray/issues/4860#issuecomment-774471403,https://api.github.com/repos/pydata/xarray/issues/4860,774471403,MDEyOklzc3VlQ29tbWVudDc3NDQ3MTQwMw==,6213168,2021-02-06T12:38:11Z,2021-02-06T12:38:23Z,MEMBER,"@keewis looks like the xarray code is making assumptions on the dask internal implementation, instead of relying on the public interface alone. I'm on it; expect a fix very shortly.","{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,800825901
https://github.com/pydata/xarray/issues/683#issuecomment-682040254,https://api.github.com/repos/pydata/xarray/issues/683,682040254,MDEyOklzc3VlQ29tbWVudDY4MjA0MDI1NA==,6213168,2020-08-27T16:01:16Z,2020-08-27T16:01:16Z,MEMBER,Indeed we can. Strong +1 for having a note in the main xarray documentation too!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,123923598
https://github.com/pydata/xarray/pull/4296#issuecomment-672816445,https://api.github.com/repos/pydata/xarray/issues/4296,672816445,MDEyOklzc3VlQ29tbWVudDY3MjgxNjQ0NQ==,6213168,2020-08-12T11:30:30Z,2020-08-12T11:30:30Z,MEMBER,@max-sixty definitely unrelated,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671108068
https://github.com/pydata/xarray/pull/4296#issuecomment-672377186,https://api.github.com/repos/pydata/xarray/issues/4296,672377186,MDEyOklzc3VlQ29tbWVudDY3MjM3NzE4Ng==,6213168,2020-08-12T00:04:35Z,2020-08-12T00:04:35Z,MEMBER,"> To confirm, this is still ""the oldest version released within the time period"", rather than ""the version that existed at the start of the time period""
Yes, this hasn't changed.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671108068
https://github.com/pydata/xarray/issues/4295#issuecomment-671842738,https://api.github.com/repos/pydata/xarray/issues/4295,671842738,MDEyOklzc3VlQ29tbWVudDY3MTg0MjczOA==,6213168,2020-08-11T09:39:05Z,2020-08-11T09:39:05Z,MEMBER,pandas is really unstable and its API breaks every other version. Extending its support window from 1 to 2 years would be extremely expensive and frustrating to maintain.,"{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671019427
https://github.com/pydata/xarray/issues/4295#issuecomment-671816252,https://api.github.com/repos/pydata/xarray/issues/4295,671816252,MDEyOklzc3VlQ29tbWVudDY3MTgxNjI1Mg==,6213168,2020-08-11T08:45:02Z,2020-08-11T08:45:02Z,MEMBER,"Discussion seems to have died down here. Can we get to a consensus and wrap this up?
My vote is to simply require setuptools >= 38.4 at runtime (for which PR https://github.com/pydata/xarray/pull/4296 is ready to go).","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671019427
https://github.com/pydata/xarray/issues/4295#issuecomment-668872362,https://api.github.com/repos/pydata/xarray/issues/4295,668872362,MDEyOklzc3VlQ29tbWVudDY2ODg3MjM2Mg==,6213168,2020-08-04T23:13:20Z,2020-08-04T23:13:20Z,MEMBER,"> It's not clear from the OP how they were installing -- i.e. from wheels or source, but if wheels, then pushing the run time dependency back would fix it.
I don't think we should be discussing a solution that works on wheels but breaks on sources...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671019427
https://github.com/pydata/xarray/issues/4295#issuecomment-668866899,https://api.github.com/repos/pydata/xarray/issues/4295,668866899,MDEyOklzc3VlQ29tbWVudDY2ODg2Njg5OQ==,6213168,2020-08-04T22:59:29Z,2020-08-04T22:59:29Z,MEMBER,"Ubuntu 18.04 ships Python 3.6.5 and setuptools 39.0.
Ubuntu 16.04 ships Python 3.5 so it's not to be taken into consideration anyway.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671019427
https://github.com/pydata/xarray/issues/4295#issuecomment-668865267,https://api.github.com/repos/pydata/xarray/issues/4295,668865267,MDEyOklzc3VlQ29tbWVudDY2ODg2NTI2Nw==,6213168,2020-08-04T22:55:12Z,2020-08-04T22:55:12Z,MEMBER,"> My preference would be to say that we support setuptools 30.3 and newer, even if we can't test it
I have tested that setuptools < 36.7 breaks setuptools-scm; the installed version becomes 0.0.0, which in turn breaks any other package that contains a minimum version check (namely, pandas).
Also, I think we agreed when we implemented NEP29 that **we should not support Python 3.6.0**, but only the latest patch version for any given minor version of a package. Python 3.6.11 (released 1 month ago) is shipped with setuptools 40.6.
Any pip or conda-based environment can trivially upgrade from Python 3.6.0 to 3.6.11.
The only users that have problems with getting setuptools >=38.4 (2.5 years old!!!) are those that use /usr/bin/python3 from a very old Linux distribution, which for some reason never got the patch updates of Python, AND expect everything to be compatible with the very latest python packages freshly downloaded from the internet. I mean, seriously?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671019427
https://github.com/pydata/xarray/pull/4299#issuecomment-667814642,https://api.github.com/repos/pydata/xarray/issues/4299,667814642,MDEyOklzc3VlQ29tbWVudDY2NzgxNDY0Mg==,6213168,2020-08-03T05:41:50Z,2020-08-03T05:41:50Z,MEMBER,@max-sixty afraid it is ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671561223
https://github.com/pydata/xarray/issues/4295#issuecomment-667659338,https://api.github.com/repos/pydata/xarray/issues/4295,667659338,MDEyOklzc3VlQ29tbWVudDY2NzY1OTMzOA==,6213168,2020-08-02T11:01:12Z,2020-08-02T11:01:12Z,MEMBER,"> importlib.resources (available since 3.7) and importlib.metadata (available since 3.8). Both also have backports (importlib-resources and importlib-metadata), so we should be able to get rid of the install-dependency on setuptools.
-1 from me, because dependencies that are only required on a specific Python version are incompatible with noarch conda recipes. This would force us to change conda to build one package for each OS x python version.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671019427
https://github.com/pydata/xarray/issues/4285#issuecomment-667637217,https://api.github.com/repos/pydata/xarray/issues/4285,667637217,MDEyOklzc3VlQ29tbWVudDY2NzYzNzIxNw==,6213168,2020-08-02T06:56:23Z,2020-08-02T06:56:23Z,MEMBER,"I think that xarray should offer a ""compatibility test toolkit"" to any numpy-like, NEP18-compatible library that wants to integrate with it.
Instead of having a module full of tests specifically for pint, one for sparse, one for cupy, one for awkward, etc. etc. etc. those projects could just write a minimal test module like this:
```python
import xarray
import sparse
xarray.testing.test_nep18_module(
    sparse,
    # TODO: lambda to create an array
    # TODO: list of xfails
)
```
which would automatically expand into a comprehensive suite of tests thanks to pytest parameterize/fixture magic.
this would allow developers of numpy-like libraries to just test their package vs what's expected from a generic NEP-18 compliant package.","{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,667864088
https://github.com/pydata/xarray/issues/4295#issuecomment-667589873,https://api.github.com/repos/pydata/xarray/issues/4295,667589873,MDEyOklzc3VlQ29tbWVudDY2NzU4OTg3Mw==,6213168,2020-08-01T21:34:05Z,2020-08-01T21:34:05Z,MEMBER,PR ready for review,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671019427
https://github.com/pydata/xarray/issues/4294#issuecomment-667589671,https://api.github.com/repos/pydata/xarray/issues/4294,667589671,MDEyOklzc3VlQ29tbWVudDY2NzU4OTY3MQ==,6213168,2020-08-01T21:31:53Z,2020-08-01T21:31:53Z,MEMBER,"I am getting the same error plus another:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/crusaderky/PycharmProjects/tmp/dist/t1/distributed/distributed.yaml'
Both xarray and distributed work fine with ``pip install``, ``python setup.py sdist``, and ``python setup.py bdist``. Please open a ticket on the pyinstaller board.
I tried re-adding the static files in setup.py but it doesn't fix the issue.
I'm opening a PR to stop loading the resource files if you're not running on jupyter notebook. This will allow pure xarray (without dask) to work with pyinstaller as long as you don't display an xarray object in jupyter.
","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,670755564
https://github.com/pydata/xarray/issues/4287#issuecomment-667583607,https://api.github.com/repos/pydata/xarray/issues/4287,667583607,MDEyOklzc3VlQ29tbWVudDY2NzU4MzYwNw==,6213168,2020-08-01T20:31:36Z,2020-08-01T20:31:36Z,MEMBER,Temporarily pinning pandas=1.0 in https://github.com/pydata/xarray/pull/4296,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,668166816
https://github.com/pydata/xarray/issues/4295#issuecomment-667578007,https://api.github.com/repos/pydata/xarray/issues/4295,667578007,MDEyOklzc3VlQ29tbWVudDY2NzU3ODAwNw==,6213168,2020-08-01T19:40:16Z,2020-08-01T19:40:16Z,MEMBER,"The key problem with ""as-old-as-they-can-be"" is that you end up with dependencies *that depend on each other* that are 1 year apart in release date. Since other projects are very frequently a lot less rigorous about testing against old dependencies (if they test at all!), that has caused an endless amount of breakage in the past. Testing with all packages as they were 1 year ago is a lot less bug-prone and time-wasting.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671019427
https://github.com/pydata/xarray/issues/4295#issuecomment-667576666,https://api.github.com/repos/pydata/xarray/issues/4295,667576666,MDEyOklzc3VlQ29tbWVudDY2NzU3NjY2Ng==,6213168,2020-08-01T19:27:21Z,2020-08-01T19:27:21Z,MEMBER,"setuptools-scm doesn't work with setuptools < 36.7 (Nov 2017).
The conda metadata is malformed for setuptools < 38.4 (Jan 2018) - it's missing a timestamp which prevents the minimum versions tool from working.
Is everybody happy with >= 38.4?","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671019427
https://github.com/pydata/xarray/issues/4295#issuecomment-667575675,https://api.github.com/repos/pydata/xarray/issues/4295,667575675,MDEyOklzc3VlQ29tbWVudDY2NzU3NTY3NQ==,6213168,2020-08-01T19:19:11Z,2020-08-01T19:19:11Z,MEMBER,"> then you should be testing with-as-old-as-they-can-be versions
We used to do that and we abandoned that policy in favour of the current rolling window, because it made developers (particularly the less experienced ones) waste a considerable amount of effort retaining backwards compatibility with obsolete versions of the dependencies that nobody cared about.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671019427
https://github.com/pydata/xarray/issues/4295#issuecomment-667569885,https://api.github.com/repos/pydata/xarray/issues/4295,667569885,MDEyOklzc3VlQ29tbWVudDY2NzU2OTg4NQ==,6213168,2020-08-01T18:24:22Z,2020-08-01T18:24:22Z,MEMBER,"> I was surprised to see this in our setup.cfg file, added by @crusaderky in #3628. The version requirement is not documented in our docs.
It is documented:
https://xarray.pydata.org/en/stable/installing.html#minimum-dependency-versions
> xarray adopts a rolling policy regarding the minimum supported version of its dependencies:
> [...]
> all other libraries: 6 months
The requirement is explicitly set in setup.cfg because *don't ship what you don't test*.
I see no problem in explicitly adding a special case to the policy for setuptools - I guess 24 months should be fine for all? I do not recommend just going back to ""whatever the very first version that works"" as we were doing before the introduction of the rolling policy.
I'm preparing a PR...
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,671019427
https://github.com/pydata/xarray/issues/4208#issuecomment-656068407,https://api.github.com/repos/pydata/xarray/issues/4208,656068407,MDEyOklzc3VlQ29tbWVudDY1NjA2ODQwNw==,6213168,2020-07-09T11:18:15Z,2020-07-09T11:19:28Z,MEMBER,"> Is it acceptable for a Pint Quantity to always have the Dask collection interface defined (i.e., be a duck Dask array), even when its magnitude (what it wraps) is not a Dask Array?
I think there are already enough headaches with ``__iter__`` being always defined and confusing libraries such as pandas (https://github.com/hgrecco/pint/issues/1128).
I don't see why pint should be explicitly aware of dask (except in unit tests)? It should only deal with generic NEP18-compatible libraries (numpy, dask, sparse, cupy, etc.).
> How should xarray check for a duck Dask Array?
We should ask the dask team to formalize what defines a ""dask-array-like"", like they already did with dask collections, and implement their definition in xarray.
I'd personally make it ""whatever defines a numpy-array-like AND has a chunks method AND the chunks method returns a tuple"".
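A minimal sketch of that check (a hypothetical helper, not a formal definition):
```python
def is_duck_dask_array(x) -> bool:
    # numpy-array-like: exposes the basic numpy duck-typing attributes
    if not all(hasattr(x, attr) for attr in ('ndim', 'shape', 'dtype')):
        return False
    # dask-specific: has chunks, and chunks is a tuple (of tuples of block sizes)
    return isinstance(getattr(x, 'chunks', None), tuple)
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,653430454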
https://github.com/pydata/xarray/pull/4175#issuecomment-650785182,https://api.github.com/repos/pydata/xarray/issues/4175,650785182,MDEyOklzc3VlQ29tbWVudDY1MDc4NTE4Mg==,6213168,2020-06-28T15:54:44Z,2020-06-28T15:54:44Z,MEMBER,"> Actually, I'm just bad at reading. Our policy is ""the minor version (X.Y) initially published no more than N months ago"", which is correctly implemented by the code.
>
> I think we might want to _change_ the policy, but that's a different matter....
Yes, the policy works well in NEP-29 for numpy and python, but can be problematic for seldom-updated packages","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,644693116
https://github.com/pydata/xarray/issues/2027#issuecomment-645498777,https://api.github.com/repos/pydata/xarray/issues/2027,645498777,MDEyOklzc3VlQ29tbWVudDY0NTQ5ODc3Nw==,6213168,2020-06-17T17:01:29Z,2020-06-17T17:01:29Z,MEMBER,Still relevant,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,309686915
https://github.com/pydata/xarray/pull/4144#issuecomment-644639292,https://api.github.com/repos/pydata/xarray/issues/4144,644639292,MDEyOklzc3VlQ29tbWVudDY0NDYzOTI5Mg==,6213168,2020-06-16T09:10:07Z,2020-06-16T09:10:07Z,MEMBER,@nbren12 it seems to me that mypy is being overly aggressive when parsing the hinted code (which is why I had to put ``# type: ignore`` on it) but more lax when the same code is invoked somewhere else such as in my test script. Overall I suspect it may be fragile and break in future mypy versions...,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,636611699
https://github.com/pydata/xarray/pull/4144#issuecomment-643655275,https://api.github.com/repos/pydata/xarray/issues/4144,643655275,MDEyOklzc3VlQ29tbWVudDY0MzY1NTI3NQ==,6213168,2020-06-13T17:44:43Z,2020-06-13T17:44:43Z,MEMBER,"I took the liberty of reworking it; please have a look.
Test script:
```python
from typing import Hashable, Mapping
import xarray
ds: xarray.Dataset
class D(Hashable, Mapping):
    def __hash__(self): ...
    def __getitem__(self, item): ...
    def __iter__(self): ...
    def __len__(self): ...
reveal_type(ds[""foo""])
reveal_type(ds[[""foo"", ""bar""]])
reveal_type(ds[{}])
reveal_type(ds[D()])
```
mypy output:
```
t1.py:12: note: Revealed type is 'xarray.core.dataarray.DataArray'
t1.py:13: note: Revealed type is 'xarray.core.dataset.Dataset'
t1.py:14: note: Revealed type is 'xarray.core.dataset.Dataset'
t1.py:15: note: Revealed type is 'xarray.core.dataset.Dataset'
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,636611699
https://github.com/pydata/xarray/pull/3824#issuecomment-624510237,https://api.github.com/repos/pydata/xarray/issues/3824,624510237,MDEyOklzc3VlQ29tbWVudDYyNDUxMDIzNw==,6213168,2020-05-06T08:23:35Z,2020-05-06T08:23:35Z,MEMBER,"LGTM, ready to merge whenever","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,575078455
https://github.com/pydata/xarray/pull/4012#issuecomment-620475061,https://api.github.com/repos/pydata/xarray/issues/4012,620475061,MDEyOklzc3VlQ29tbWVudDYyMDQ3NTA2MQ==,6213168,2020-04-28T08:57:46Z,2020-04-28T08:57:46Z,MEMBER,"I went through everything and it seems all fine.
I'm happy to merge as soon as you add a line to the What's New","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,607814501
https://github.com/pydata/xarray/pull/4012#issuecomment-620461895,https://api.github.com/repos/pydata/xarray/issues/4012,620461895,MDEyOklzc3VlQ29tbWVudDYyMDQ2MTg5NQ==,6213168,2020-04-28T08:32:08Z,2020-04-28T08:32:08Z,MEMBER,LGTM - can't wait for the tool to mature!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,607814501
https://github.com/pydata/xarray/pull/3989#issuecomment-617160140,https://api.github.com/repos/pydata/xarray/issues/3989,617160140,MDEyOklzc3VlQ29tbWVudDYxNzE2MDE0MA==,6213168,2020-04-21T12:53:00Z,2020-04-21T12:53:00Z,MEMBER,"LGTM. I'm the author of the upstream change - apologies, I did not expect the gen_cluster decorator to be used in downstream projects.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,603937718
https://github.com/pydata/xarray/issues/3967#issuecomment-613054712,https://api.github.com/repos/pydata/xarray/issues/3967,613054712,MDEyOklzc3VlQ29tbWVudDYxMzA1NDcxMg==,6213168,2020-04-13T19:26:28Z,2020-04-13T19:26:28Z,MEMBER,"What you're asking for has two huge blocker dependencies:
- Type annotations for numpy: https://github.com/numpy/numpy/issues/7370
- The ability to define TypedDict-like annotations for a generic MutableMapping subclass. Such a feature should be suggested to the python core devs.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,598991028
https://github.com/pydata/xarray/issues/1699#issuecomment-609650053,https://api.github.com/repos/pydata/xarray/issues/1699,609650053,MDEyOklzc3VlQ29tbWVudDYwOTY1MDA1Mw==,6213168,2020-04-06T08:26:13Z,2020-04-06T08:26:13Z,MEMBER,still relevant,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,272004812
https://github.com/pydata/xarray/issues/3891#issuecomment-604665945,https://api.github.com/repos/pydata/xarray/issues/3891,604665945,MDEyOklzc3VlQ29tbWVudDYwNDY2NTk0NQ==,6213168,2020-03-26T20:24:23Z,2020-03-26T20:24:23Z,MEMBER,"@shoyer to me it would make the most sense to do a union of the inputs:
- if a key is present only in one input, it goes to the output
- if a key is present in multiple inputs, always take the leftmost
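A minimal sketch of that rule (hypothetical helper):
```python
def merge_attrs(*attrs_list):
    # union of all inputs; on conflicting keys the leftmost input wins
    out = {}
    for attrs in attrs_list:
        for k, v in attrs.items():
            out.setdefault(k, v)
    return out
```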
Note how this would be different from how scalar coords are treated; scalar coords are discarded when they arrive from multiple inputs and are mismatched. The reason I don't think it's wise to do the same with attrs is that it could be uncontrollably expensive to compute equality, depending on what people loaded in them. I've personally seen them used as back-references to the whole application framework. Also there's no guarantee that they implement ``__eq__`` or that it returns a bool; e.g. you can't compare two data structures that somewhere inside contain numpy arrays.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,587895591
https://github.com/pydata/xarray/issues/3891#issuecomment-604310958,https://api.github.com/repos/pydata/xarray/issues/3891,604310958,MDEyOklzc3VlQ29tbWVudDYwNDMxMDk1OA==,6213168,2020-03-26T09:03:50Z,2020-03-26T09:03:50Z,MEMBER,"Why would you want a ``.drop_attrs()`` method? ``.attrs.clear()`` will do just fine.
I fully agree we should keep attrs by default.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,587895591
https://github.com/pydata/xarray/issues/3863#issuecomment-600084031,https://api.github.com/repos/pydata/xarray/issues/3863,600084031,MDEyOklzc3VlQ29tbWVudDYwMDA4NDAzMQ==,6213168,2020-03-17T13:53:50Z,2020-03-17T13:53:50Z,MEMBER,"This has nothing to do with to_netcdf or slicing. Your upstream data is broken for the variable ``specific_humidity_ml``.
When you invoke to_netcdf(), the remote resources actually holding the variable(s) are requested for the first time, and the server responds with a 404.
Just load the variable into memory:
```python
>>> fnx.specific_humidity_ml.compute()
RuntimeError: NetCDF: file not found
```
Please reopen if you have reason to believe the issue is in the xarray pydap driver and not in the source data.
P.S. all your lines invoking to_netcdf could be replaced with a single command:
```python
fnx[
[
""forecast_reference_time"",
""p0"",
""ap"",
""b"",
""projection_lambert"",
""ozone_profile_c"",
""specific_humidity_ml"",
]
].isel(time = slice(0,48)).to_netcdf('meps_out.nc', 'w', format = 'NETCDF4')
```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,583001763
https://github.com/pydata/xarray/pull/3812#issuecomment-594051281,https://api.github.com/repos/pydata/xarray/issues/3812,594051281,MDEyOklzc3VlQ29tbWVudDU5NDA1MTI4MQ==,6213168,2020-03-03T16:49:23Z,2020-03-03T16:49:23Z,MEMBER,@keewis I'll have a look,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,573007307
https://github.com/pydata/xarray/issues/3213#issuecomment-592476821,https://api.github.com/repos/pydata/xarray/issues/3213,592476821,MDEyOklzc3VlQ29tbWVudDU5MjQ3NjgyMQ==,6213168,2020-02-28T11:39:50Z,2020-02-28T11:39:50Z,MEMBER,"``xr.apply_ufunc(sparse.COO, ds, dask='parallelized')``
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,479942077
https://github.com/pydata/xarray/issues/2028#issuecomment-592475338,https://api.github.com/repos/pydata/xarray/issues/2028,592475338,MDEyOklzc3VlQ29tbWVudDU5MjQ3NTMzOA==,6213168,2020-02-28T11:35:04Z,2020-02-28T11:35:04Z,MEMBER,Still relevant,"{""total_count"": 8, ""+1"": 8, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,309691307
https://github.com/pydata/xarray/issues/3806#issuecomment-592474662,https://api.github.com/repos/pydata/xarray/issues/3806,592474662,MDEyOklzc3VlQ29tbWVudDU5MjQ3NDY2Mg==,6213168,2020-02-28T11:33:03Z,2020-02-28T11:33:03Z,MEMBER,+1,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,572295802
https://github.com/pydata/xarray/issues/3786#issuecomment-589708365,https://api.github.com/repos/pydata/xarray/issues/3786,589708365,MDEyOklzc3VlQ29tbWVudDU4OTcwODM2NQ==,6213168,2020-02-21T15:42:36Z,2020-02-21T15:42:36Z,MEMBER,"> This also means that either the new array is no longer C-contiguous, or the .unstack() operation has had to copy all the data to rearrange it.
The former. As a core design principle, xarray does not care about dimension order, and any user code that implicitly relies on it should be considered bad design.
The ``.transpose()`` method mostly exists for when people need to access the numpy ``.data`` object directly with a numpy function.","{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,568968607
https://github.com/pydata/xarray/issues/3213#issuecomment-587564478,https://api.github.com/repos/pydata/xarray/issues/3213,587564478,MDEyOklzc3VlQ29tbWVudDU4NzU2NDQ3OA==,6213168,2020-02-18T16:58:25Z,2020-02-18T16:58:25Z,MEMBER,"you just need to
1. load up your NetCDF files with *xarray.open_mfdataset*. This will give you
   - an xarray.Dataset,
   - that wraps around one dask.array.Array per variable,
   - that wraps around one numpy.ndarray (DENSE array) per dask chunk.
2. convert to sparse with *xarray.apply_ufunc(sparse.COO, ds)*. This will give you
   - an xarray.Dataset,
   - that wraps around one dask.array.Array per variable,
   - that wraps around one sparse.COO (SPARSE array) per dask chunk.
3. use xarray.merge or whatever to align and merge
4. you may want to rechunk at this point to obtain fewer, larger chunks. You can estimate your chunk size in bytes if you know your data density (read my previous email).
5. Do whatever other calculations you want. All operations will produce output of the same data type as point 2.
6. To go back to dense, invoke *xarray.apply_ufunc(lambda x: x.todense(), ds)* to go back to the format as in (1), as shown in the sketch below. This step is only necessary if you have something that won't accept/recognize sparse arrays directly in input; namely, writing to a NetCDF dataset. If your data has not been reduced enough, you may need to rechunk into smaller chunks first in order to fit into your RAM constraints.
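Condensed into code, the steps above might look like this (a sketch; the file names are hypothetical, and depending on your xarray version you may also need to pass ``output_dtypes=`` to ``apply_ufunc``):
```python
import sparse
import xarray

# 1. open the dense NetCDF sources lazily (hypothetical file pattern)
ds = xarray.open_mfdataset('data/*.nc')

# 2. turn every dask chunk into a sparse.COO instead of a numpy.ndarray
ds = xarray.apply_ufunc(sparse.COO, ds, dask='parallelized')

# 3.-5. align, merge, rechunk and compute as usual; every operation
# keeps producing sparse chunks

# 6. densify again before handing the data to anything that expects
# numpy arrays, e.g. writing to NetCDF
ds = xarray.apply_ufunc(lambda x: x.todense(), ds, dask='parallelized')
ds.to_netcdf('out.nc')
```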
Regards
On Tue, 18 Feb 2020 at 13:56, fmfreeze wrote:
> Thank you @crusaderky for your input.
>
> I understand and agree with your statements for sparse data files.
> My approach is different, because within my (hdf5) data files on disc, I
> have no sparse datasets at all.
>
> But as I combine two differently sampled xarray dataset (initialized by
> h5py > dask > xarray) with xarrays built-in top-level function
> ""xarray.merge()"" (resp. xarray.combine_by_coords()), the resulting dataset
> is sparse.
>
> Generally that is nice behaviour, because two differently sampled datasets
> get aligned along a coordinate/dimension, and the gaps are filled by NaNs.
>
> Nevertheless, those NaN ""gaps"" seem to need memory for every single NaN.
> That is what should be avoided.
> Maybe by implementing a redundant pointer to the same memory address for
> each NaN?
>
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,479942077
https://github.com/pydata/xarray/issues/2459#issuecomment-586139738,https://api.github.com/repos/pydata/xarray/issues/2459,586139738,MDEyOklzc3VlQ29tbWVudDU4NjEzOTczOA==,6213168,2020-02-14T07:50:08Z,2020-02-14T07:50:47Z,MEMBER,"@tqfjo unrelated. You're comparing the creation of a dataset with 2 variables with the creation of one with 3000. Unsurprisingly, the latter will take 1500x. If your dataset doesn't functionally contain 3000 variables but just a single two-dimensional variable, use ``xarray.DataArray(ds)``.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,365973662
https://github.com/pydata/xarray/issues/3213#issuecomment-585997533,https://api.github.com/repos/pydata/xarray/issues/3213,585997533,MDEyOklzc3VlQ29tbWVudDU4NTk5NzUzMw==,6213168,2020-02-13T22:12:37Z,2020-02-13T22:12:37Z,MEMBER,"Hi fmfreeze,
> *Dask integration enables xarray to scale to big data, only as long as the data has no sparse character*. Do you agree on that formulation or am I missing something fundamental?

I don't agree. To my understanding xarray->dask->sparse works very well (save bugs), *as long as your data density (the percentage of non-default points) is roughly constant across dask chunks*.
If it isn't, then you'll have some chunks that consume substantially more RAM and CPU to compute than others. This can be mitigated, if you know in advance where you are going to have more samples, by setting uneven dask chunk sizes. For example, if you have a one-dimensional array of 100k points and you know in advance that the density of non-default samples follows a gaussian or triangular distribution, then it may be wise to have very large chunks at the tails and then get them progressively smaller towards the center, e.g. (30k, 12k, 5k, 2k, 1k, 1k, 2k, 5k, 12k, 30k).
Of course, there are use cases where you're going to have unpredictable hotspots; I'm afraid that in those the only thing you can do is size your chunks for the worst case and end up oversplitting everywhere else.
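In code, such uneven chunking can be spelled out explicitly (a minimal sketch with made-up sizes):
```python
import numpy as np
import xarray

# hypothetical 100k-point array whose sample density peaks in the middle
arr = xarray.DataArray(np.zeros(100_000), dims=['x'])

# explicit uneven chunk sizes: large chunks in the sparse tails,
# progressively smaller ones around the dense center
arr = arr.chunk({'x': (30_000, 12_000, 5_000, 2_000, 1_000,
                       1_000, 2_000, 5_000, 12_000, 30_000)})
```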
Regards
Guido
On Thu, 13 Feb 2020 at 10:55, fmfreeze wrote:
> Thank you all for making xarray and its tight development with dask so
> great!
>
> As @shoyer mentioned
>
> Yes, it would be useful (eventually) to have lazy loading of sparse arrays
> from disk, like what we currently do for dense arrays. This would indeed
> require knowing that the indices are sorted.
>
> I am wondering, if creating a *lazy* & *sparse* xarray Dataset/DataArray
> is already possible?
> Especially when *creating* the sparse part at runtime, and *loading* only
> the data part:
> Assume two differently sampled - and lazy dask - DataArrays are
> merged/combined along a coordinate axis into a Dataset.
> Then the smaller (= less dense) DataVariable is filled with NaNs. As far
> as I experienced the current behaviour is, that each NaN value requires
> memory.
>
> That issue might be formulated this way:
> *Dask integration enables xarray to scale to big data, only as long as the
> data has no sparse character*. Do you agree on that formulation or am I
> missing something fundamental?
>
> A code example reproducing that issue is described here:
> https://stackoverflow.com/q/60117268/9657367
>
","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,479942077
https://github.com/pydata/xarray/issues/3763#issuecomment-583783815,https://api.github.com/repos/pydata/xarray/issues/3763,583783815,MDEyOklzc3VlQ29tbWVudDU4Mzc4MzgxNQ==,6213168,2020-02-08T22:39:54Z,2020-02-08T22:39:54Z,MEMBER,"Hi Scott,
I can't think of a generic situation where text labels have a numerical weight that is hardcoded to their position in the alphabet, e.g. mean(""A"", ""C"") = ""B"".
What one typically does is map the labels (any string) to their (arbitrary) weights, interpolate the weights, and then do a nearest-neighbour interpolation (or floor or ceil, depending on the preference) back to the label. Which is what you described, but with the special caveat that your weights are the ASCII codes for your labels.
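A minimal sketch of that round-trip (illustrative labels; ``interp`` with nearest-neighbour needs scipy installed):
```python
import numpy as np
import xarray

# illustrative labels sampled at t = 0, 1, 2, 3
labels = xarray.DataArray(['A', 'A', 'B', 'C'], dims=['t'], coords={'t': [0, 1, 2, 3]})

# map labels -> arbitrary numeric weights
weights = {'A': 0, 'B': 1, 'C': 2}
inv = {v: k for k, v in weights.items()}
as_num = labels.copy(data=[weights[x] for x in labels.values])

# upsample with nearest-neighbour interpolation on the weights
upsampled = as_num.interp(t=np.arange(0, 3.01, 0.5), method='nearest')

# map the interpolated weights back to labels
back = upsampled.copy(data=[inv[int(v)] for v in upsampled.values])
```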
On Sat, 8 Feb 2020 at 20:43, scottcanoe wrote:
> I'd like to suggest an improvement to enable a repeat-based interpolation
> mechanism for non-numerical data. In my use case, I have time series data
> (dim='t'), where each timepoint is associated with a measured variable
> (e.g., fluorescence) as well as a label indicating the stimulus being
> presented (e.g., ""A""). However, if and when I need to upsample my data, the
> string-valued stimulus information is lost, and its imperative that the
> stimulus information is still present when working on the resampled data.
>
> My solution to this problem has been to map the labels to integers, use
> nearest-neighbor interpolation on the integer-valued representation, and
> finally map the integers back to labels. (I'm willing to bet there's a name
> for this technique, but I wasn't able to find it by googling around for it.)
>
> I'm new to xarray, but so far as I can tell this functionality is not
> provided. More specifically, calling DataArray.interp on a string-valued
> array results in a type error ( a numeric type array. Given ).
>
> Finally, I'd like to applaud you for your work on xarray. I only wish I
> had found it sooner!
>
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,562075354
https://github.com/pydata/xarray/issues/3702#issuecomment-580833271,https://api.github.com/repos/pydata/xarray/issues/3702,580833271,MDEyOklzc3VlQ29tbWVudDU4MDgzMzI3MQ==,6213168,2020-01-31T17:38:07Z,2020-01-31T17:38:07Z,MEMBER,"```
ERROR: Could not find a version that satisfies the requirement setuptools_scm (from versions: none)
ERROR: No matching distribution found for setuptools_scm
```
Should get fixed by changing recipe/meta.yaml:
```yaml
requirements:
build:
- python >=3.6
- pip
- setuptools
- setuptools_scm
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,551484736
https://github.com/pydata/xarray/pull/3727#issuecomment-580675172,https://api.github.com/repos/pydata/xarray/issues/3727,580675172,MDEyOklzc3VlQ29tbWVudDU4MDY3NTE3Mg==,6213168,2020-01-31T10:22:33Z,2020-01-31T10:22:33Z,MEMBER,"Ready for review and merge.
TODO After merging in master:
- check rtd build
- check binder","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,557020666
https://github.com/pydata/xarray/pull/3628#issuecomment-580414183,https://api.github.com/repos/pydata/xarray/issues/3628,580414183,MDEyOklzc3VlQ29tbWVudDU4MDQxNDE4Mw==,6213168,2020-01-30T19:25:05Z,2020-01-30T19:25:05Z,MEMBER,"@dcherian I've heavily changed this, please give it a second read and merge if you're happy with it","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,538422641
https://github.com/pydata/xarray/pull/3727#issuecomment-580319584,https://api.github.com/repos/pydata/xarray/issues/3727,580319584,MDEyOklzc3VlQ29tbWVudDU4MDMxOTU4NA==,6213168,2020-01-30T15:55:24Z,2020-01-30T15:55:24Z,MEMBER,I have no clue whatsoever what's going on with docs?!? I didn't change it in any way and it's just... freezing?!?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,557020666
https://github.com/pydata/xarray/pull/3626#issuecomment-580273606,https://api.github.com/repos/pydata/xarray/issues/3626,580273606,MDEyOklzc3VlQ29tbWVudDU4MDI3MzYwNg==,6213168,2020-01-30T14:17:25Z,2020-01-30T14:17:25Z,MEMBER,"Ok, I've played with it a bit and it's a hard -1 from me.
- I could not find a way to run it locally (correct me if I'm wrong); meaning all PRs will need to go through a million trial-and-error commits and pushes
- Very heavily overlapping with what we already have (flake8, black, isort, mypy)
- Will trigger on issues that are explicitly (made) ok for the other linter tools
","{""total_count"": 3, ""+1"": 3, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,538200559
https://github.com/pydata/xarray/pull/3626#issuecomment-580234245,https://api.github.com/repos/pydata/xarray/issues/3626,580234245,MDEyOklzc3VlQ29tbWVudDU4MDIzNDI0NQ==,6213168,2020-01-30T12:40:47Z,2020-01-30T12:40:47Z,MEMBER,"If you merge from master you should get all green on the upstream-dev test now
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,538200559
https://github.com/pydata/xarray/pull/3640#issuecomment-580231944,https://api.github.com/repos/pydata/xarray/issues/3640,580231944,MDEyOklzc3VlQ29tbWVudDU4MDIzMTk0NA==,6213168,2020-01-30T12:33:58Z,2020-01-30T12:33:58Z,MEMBER,if you merge from master you should get all green on the upstream-dev test now,"{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 1, ""eyes"": 0}",,539394615
https://github.com/pydata/xarray/pull/3643#issuecomment-580231857,https://api.github.com/repos/pydata/xarray/issues/3643,580231857,MDEyOklzc3VlQ29tbWVudDU4MDIzMTg1Nw==,6213168,2020-01-30T12:33:45Z,2020-01-30T12:33:45Z,MEMBER,if you merge from master you should get all green on the upstream-dev test now,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,539988974
https://github.com/pydata/xarray/pull/3728#issuecomment-579892184,https://api.github.com/repos/pydata/xarray/issues/3728,579892184,MDEyOklzc3VlQ29tbWVudDU3OTg5MjE4NA==,6213168,2020-01-29T18:21:52Z,2020-01-29T18:21:52Z,MEMBER,I think a unit test for the use case is in order?,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,557023511
https://github.com/pydata/xarray/pull/3628#issuecomment-579887657,https://api.github.com/repos/pydata/xarray/issues/3628,579887657,MDEyOklzc3VlQ29tbWVudDU3OTg4NzY1Nw==,6213168,2020-01-29T18:10:45Z,2020-01-29T18:10:45Z,MEMBER,"setuptools is not a runtime dependency since https://github.com/pydata/xarray/pull/3720
Also, I'm confused - both conda and cpython always come with setuptools preinstalled. How did you manage to get an environment without it?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,538422641
https://github.com/pydata/xarray/pull/3707#issuecomment-579865592,https://api.github.com/repos/pydata/xarray/issues/3707,579865592,MDEyOklzc3VlQ29tbWVudDU3OTg2NTU5Mg==,6213168,2020-01-29T17:20:37Z,2020-01-29T17:20:37Z,MEMBER,agree,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,551727037
https://github.com/pydata/xarray/pull/3724#issuecomment-578896237,https://api.github.com/repos/pydata/xarray/issues/3724,578896237,MDEyOklzc3VlQ29tbWVudDU3ODg5NjIzNw==,6213168,2020-01-27T18:52:14Z,2020-01-27T18:52:14Z,MEMBER,It works https://github.com/pydata/xarray/network/dependencies,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,555752381
https://github.com/pydata/xarray/pull/3713#issuecomment-578312410,https://api.github.com/repos/pydata/xarray/issues/3713,578312410,MDEyOklzc3VlQ29tbWVudDU3ODMxMjQxMA==,6213168,2020-01-24T21:39:46Z,2020-01-24T21:39:46Z,MEMBER,We could increase the support window for seldom-updated packages (read: not dask) to 1 year. ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,552994673
https://github.com/pydata/xarray/pull/3713#issuecomment-578288717,https://api.github.com/repos/pydata/xarray/issues/3713,578288717,MDEyOklzc3VlQ29tbWVudDU3ODI4ODcxNw==,6213168,2020-01-24T20:28:25Z,2020-01-24T20:28:25Z,MEMBER,I have no problems with supporting versions older than those mandated by the policy as long as there isn't any major benefit in dropping them... ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,552994673
https://github.com/pydata/xarray/pull/3721#issuecomment-578165339,https://api.github.com/repos/pydata/xarray/issues/3721,578165339,MDEyOklzc3VlQ29tbWVudDU3ODE2NTMzOQ==,6213168,2020-01-24T14:59:40Z,2020-01-24T14:59:40Z,MEMBER,@max-sixty that's what PULL_REQUEST_TEMPLATE.md is for...,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,554662467
https://github.com/pydata/xarray/pull/3721#issuecomment-578153657,https://api.github.com/repos/pydata/xarray/issues/3721,578153657,MDEyOklzc3VlQ29tbWVudDU3ODE1MzY1Nw==,6213168,2020-01-24T14:31:24Z,2020-01-24T14:31:24Z,MEMBER,"@max-sixty I wouldn't 100% advise using that level of automation for isort. Have a look at the ``# isort:skip`` tags in our codebase; they're all cases where isort would otherwise break the code - see the sketch below.
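A hypothetical illustration (the matplotlib example is mine, not taken from our codebase) of an import that must not be reordered:

```python
import matplotlib

# The backend must be selected before pyplot is imported, so isort
# must not hoist the pyplot import above the use() call.
matplotlib.use(""Agg"")
import matplotlib.pyplot as plt  # isort:skip
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,554662467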
https://github.com/pydata/xarray/pull/3721#issuecomment-578084177,https://api.github.com/repos/pydata/xarray/issues/3721,578084177,MDEyOklzc3VlQ29tbWVudDU3ODA4NDE3Nw==,6213168,2020-01-24T10:59:21Z,2020-01-24T10:59:21Z,MEMBER,Demo isort CI in action: https://github.com/pydata/xarray/pull/3721/checks?check_run_id=406904833,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,554662467
https://github.com/pydata/xarray/pull/3714#issuecomment-578060759,https://api.github.com/repos/pydata/xarray/issues/3714,578060759,MDEyOklzc3VlQ29tbWVudDU3ODA2MDc1OQ==,6213168,2020-01-24T09:46:20Z,2020-01-24T09:46:20Z,MEMBER,"@keewis interesting question. setuptools is technically not part of the Python standard library, but it ships alongside pip and is always included with the cpython binaries, `conda create python=3.6`, and pypy. So... I'm not sure?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,553518018
https://github.com/pydata/xarray/pull/3713#issuecomment-577816539,https://api.github.com/repos/pydata/xarray/issues/3713,577816539,MDEyOklzc3VlQ29tbWVudDU3NzgxNjUzOQ==,6213168,2020-01-23T18:40:29Z,2020-01-23T18:40:29Z,MEMBER,"Because NEP29 mandates supporting every version released within the last X months, not just the single most recent version that is more than X months old.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,552994673
https://github.com/pydata/xarray/pull/3714#issuecomment-577205054,https://api.github.com/repos/pydata/xarray/issues/3714,577205054,MDEyOklzc3VlQ29tbWVudDU3NzIwNTA1NA==,6213168,2020-01-22T14:24:52Z,2020-01-22T14:24:52Z,MEMBER,"Awesome, everything looks in order.
This is now ready for review and merge.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,553518018
https://github.com/pydata/xarray/issues/3697#issuecomment-577197732,https://api.github.com/repos/pydata/xarray/issues/3697,577197732,MDEyOklzc3VlQ29tbWVudDU3NzE5NzczMg==,6213168,2020-01-22T14:08:20Z,2020-01-22T14:08:20Z,MEMBER,"The obvious downside is that any existing link to one of the internal pages of our documentation will break. Also, I'm unsure how straightforward it will be to rebuild all of our historical versions.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,550335922
https://github.com/pydata/xarray/issues/3697#issuecomment-577197322,https://api.github.com/repos/pydata/xarray/issues/3697,577197322,MDEyOklzc3VlQ29tbWVudDU3NzE5NzMyMg==,6213168,2020-01-22T14:07:25Z,2020-01-22T14:07:25Z,MEMBER,Very glad to upvote anything that rids us of the RTD CI!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,550335922
https://github.com/pydata/xarray/pull/3714#issuecomment-577178942,https://api.github.com/repos/pydata/xarray/issues/3714,577178942,MDEyOklzc3VlQ29tbWVudDU3NzE3ODk0Mg==,6213168,2020-01-22T13:22:29Z,2020-01-22T13:22:29Z,MEMBER,@keewis done,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,553518018
https://github.com/pydata/xarray/pull/3714#issuecomment-577174666,https://api.github.com/repos/pydata/xarray/issues/3714,577174666,MDEyOklzc3VlQ29tbWVudDU3NzE3NDY2Ng==,6213168,2020-01-22T13:10:57Z,2020-01-22T13:10:57Z,MEMBER,"I seem to have no way to test RTD >_<
https://readthedocs.org/projects/crusaderky-xarray/builds/10305870/
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,553518018
https://github.com/pydata/xarray/pull/3714#issuecomment-577168867,https://api.github.com/repos/pydata/xarray/issues/3714,577168867,MDEyOklzc3VlQ29tbWVudDU3NzE2ODg2Nw==,6213168,2020-01-22T12:54:38Z,2020-01-22T12:54:38Z,MEMBER,I need to retest the RTD CI before merging,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,553518018
https://github.com/pydata/xarray/pull/3713#issuecomment-577166188,https://api.github.com/repos/pydata/xarray/issues/3713,577166188,MDEyOklzc3VlQ29tbWVudDU3NzE2NjE4OA==,6213168,2020-01-22T12:51:48Z,2020-01-22T12:51:48Z,MEMBER,Related: #3714 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,552994673
https://github.com/pydata/xarray/issues/3369#issuecomment-576777368,https://api.github.com/repos/pydata/xarray/issues/3369,576777368,MDEyOklzc3VlQ29tbWVudDU3Njc3NzM2OA==,6213168,2020-01-21T16:58:51Z,2020-01-21T16:58:51Z,MEMBER,I've been using setuptools-scm in multiple other projects and it's great!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,502082831
https://github.com/pydata/xarray/pull/3703#issuecomment-575848943,https://api.github.com/repos/pydata/xarray/issues/3703,575848943,MDEyOklzc3VlQ29tbWVudDU3NTg0ODk0Mw==,6213168,2020-01-18T00:58:06Z,2020-01-18T00:58:06Z,MEMBER,I think we're very close to 0.15... ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,551532886
https://github.com/pydata/xarray/pull/3705#issuecomment-575786728,https://api.github.com/repos/pydata/xarray/issues/3705,575786728,MDEyOklzc3VlQ29tbWVudDU3NTc4NjcyOA==,6213168,2020-01-17T20:39:27Z,2020-01-17T20:39:27Z,MEMBER,fixed,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,551544665
https://github.com/pydata/xarray/issues/3696#issuecomment-575073923,https://api.github.com/repos/pydata/xarray/issues/3696,575073923,MDEyOklzc3VlQ29tbWVudDU3NTA3MzkyMw==,6213168,2020-01-16T09:56:08Z,2020-01-16T09:56:08Z,MEMBER,"pickle should never be used, with any library, as a means of long-term storage, because it intrinsically relies on implementation details remaining the same across versions. Please use one of the several formats designed for exactly that purpose (NetCDF et al.).
xarray already guarantees stability of its *public API* - we typically have a deprecation cycle lasting at least 1 major version. It will never offer stability for implementation details.
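For illustration, a minimal sketch of the stable alternative (file names hypothetical):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({""temperature"": ((""time"",), np.arange(4.0))})

# Fragile: a pickle written today may fail to load after upgrading xarray.
# import pickle
# with open(""ds.pkl"", ""wb"") as f:
#     pickle.dump(ds, f)

# Stable: NetCDF is a documented on-disk format, independent of xarray internals.
ds.to_netcdf(""ds.nc"")
restored = xr.open_dataset(""ds.nc"")
```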
","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,550067372
https://github.com/pydata/xarray/issues/3644#issuecomment-567568888,https://api.github.com/repos/pydata/xarray/issues/3644,567568888,MDEyOklzc3VlQ29tbWVudDU2NzU2ODg4OA==,6213168,2019-12-19T16:46:18Z,2019-12-19T16:46:18Z,MEMBER,Let us know if you find evidence of xarray-specific problematic behaviour on the matter,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,540399695
https://github.com/pydata/xarray/issues/3644#issuecomment-567564051,https://api.github.com/repos/pydata/xarray/issues/3644,567564051,MDEyOklzc3VlQ29tbWVudDU2NzU2NDA1MQ==,6213168,2019-12-19T16:33:45Z,2019-12-19T16:33:45Z,MEMBER,I feel this should be opened on the pandas board? xarray would be better off not hacking around the deficiencies of its dependencies...,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,540399695
https://github.com/pydata/xarray/pull/3635#issuecomment-566548997,https://api.github.com/repos/pydata/xarray/issues/3635,566548997,MDEyOklzc3VlQ29tbWVudDU2NjU0ODk5Nw==,6213168,2019-12-17T13:50:11Z,2019-12-17T13:50:11Z,MEMBER,Thank you!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,539059754
https://github.com/pydata/xarray/issues/3634#issuecomment-566511361,https://api.github.com/repos/pydata/xarray/issues/3634,566511361,MDEyOklzc3VlQ29tbWVudDU2NjUxMTM2MQ==,6213168,2019-12-17T11:58:31Z,2019-12-17T11:58:31Z,MEMBER,Looks straightforward - could you open a PR?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,539010474
https://github.com/pydata/xarray/pull/3533#issuecomment-562210176,https://api.github.com/repos/pydata/xarray/issues/3533,562210176,MDEyOklzc3VlQ29tbWVudDU2MjIxMDE3Ng==,6213168,2019-12-05T16:39:29Z,2019-12-05T16:39:29Z,MEMBER,In it goes \*ducks for cover\*,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,522935511
https://github.com/pydata/xarray/pull/3533#issuecomment-561755650,https://api.github.com/repos/pydata/xarray/issues/3533,561755650,MDEyOklzc3VlQ29tbWVudDU2MTc1NTY1MA==,6213168,2019-12-04T17:30:33Z,2019-12-04T17:30:33Z,MEMBER,"Implemented @shoyer 's suggestions and aligned to master.
Merging into master tomorrow!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,522935511