html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/1118#issuecomment-261551610,https://api.github.com/repos/pydata/xarray/issues/1118,261551610,MDEyOklzc3VlQ29tbWVudDI2MTU1MTYxMA==,1310437,2016-11-18T14:58:24Z,2016-11-18T14:58:24Z,CONTRIBUTOR,"With the new changes, this will now conflict with #1128, though easy to solve.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,189095110
https://github.com/pydata/xarray/pull/1118#issuecomment-260916046,https://api.github.com/repos/pydata/xarray/issues/1118,260916046,MDEyOklzc3VlQ29tbWVudDI2MDkxNjA0Ng==,1310437,2016-11-16T10:55:00Z,2016-11-16T10:55:00Z,CONTRIBUTOR,"Travis succeeds, though there are lots of failures in the environments with allowed failures. They look unrelated to me, but I find it hard to tell. Appveyor doesn't seem to run the quantities tests, so I guess the requirements are missing there too. Where would I add requirements for Appveyor?
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,189095110
https://github.com/pydata/xarray/pull/1118#issuecomment-260710056,https://api.github.com/repos/pydata/xarray/issues/1118,260710056,MDEyOklzc3VlQ29tbWVudDI2MDcxMDA1Ng==,1310437,2016-11-15T17:34:15Z,2016-11-15T17:34:15Z,CONTRIBUTOR,"You are right. There seem to be quite a number of varying `requirements` files. Should I add it to all of them? Also, I'm not very well versed in Travis-CI `.yml`: which repo are the requirements served from? I think `python-quantities` is in Debian. Or should I just go for pip? There hasn't been a release in a long time and GitHub master has progressed quite a bit, but the released version should be compatible too.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,189095110
https://github.com/pydata/xarray/pull/1122#issuecomment-260706103,https://api.github.com/repos/pydata/xarray/issues/1122,260706103,MDEyOklzc3VlQ29tbWVudDI2MDcwNjEwMw==,1310437,2016-11-15T17:20:30Z,2016-11-15T17:20:30Z,CONTRIBUTOR,"Unfortunately, I was unable to come up with a good regression test. Interactive testing confirms that the fix is working (no iteration is performed, and the runtime of the example given in #1121 went down from ~1 s to ~0.3 µs).
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,189451582
https://github.com/pydata/xarray/issues/1121#issuecomment-260704684,https://api.github.com/repos/pydata/xarray/issues/1121,260704684,MDEyOklzc3VlQ29tbWVudDI2MDcwNDY4NA==,1310437,2016-11-15T17:15:25Z,2016-11-15T17:15:25Z,CONTRIBUTOR,"I think I found it (#1122). I guess whenever a non-scalar assignment is made (as in `result[:] = [value]`), something like `np.asanyarray` is performed on the new value. Luckily, numpy is perfectly happy with indexing of a 0d array with a 0d index (i.e. an empty tuple).
Thinking about it, `result[0] = value` would probably have worked too.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,189415576
https://github.com/pydata/xarray/issues/1121#issuecomment-260697960,https://api.github.com/repos/pydata/xarray/issues/1121,260697960,MDEyOklzc3VlQ29tbWVudDI2MDY5Nzk2MA==,1310437,2016-11-15T16:52:41Z,2016-11-15T16:52:41Z,CONTRIBUTOR,"Well, xarrays are way too useful not to nest them, even if that involves the scary `dtype=object` :-). Thanks for pointing me in the right direction, I'll try to find a fix.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,189415576
https://github.com/pydata/xarray/pull/1119#issuecomment-260339257,https://api.github.com/repos/pydata/xarray/issues/1119,260339257,MDEyOklzc3VlQ29tbWVudDI2MDMzOTI1Nw==,1310437,2016-11-14T13:52:16Z,2016-11-14T14:51:26Z,CONTRIBUTOR,"This fix handles the case `ds['somecoord'].rename({'somecoord':'newcoord'})`. It does not apply to `ds['somecoord'].rename('newcoord')`, since the documentation of `DataArray.rename` does not mention coordinates at all if a string is given as the argument. Nevertheless, one could argue that the two should be equivalent, in which case the latter would have to be fixed as well.
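To illustrate the two call forms with a made-up dataset:

```python
import xarray as xr

ds = xr.Dataset(coords={'somecoord': [1, 2, 3]})

# dict form: the case handled by this fix, renaming the coordinate itself
renamed_dict = ds['somecoord'].rename({'somecoord': 'newcoord'})

# string form: per the documented behaviour, only the DataArray's name changes
renamed_str = ds['somecoord'].rename('newcoord')
```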
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,189099082
https://github.com/pydata/xarray/issues/1074#issuecomment-258425477,https://api.github.com/repos/pydata/xarray/issues/1074,258425477,MDEyOklzc3VlQ29tbWVudDI1ODQyNTQ3Nw==,1310437,2016-11-04T13:04:37Z,2016-11-04T13:04:37Z,CONTRIBUTOR,"As for the consistency concern, I wouldn't have expected that to be a big issue. I'd argue that most functions mapping `np.ndarray -> np.ndarray` will not mind receiving a `DataArray` instead. On the other hand, functions mapping `DataArray -> np.ndarray` would seldom prefer to receive the raw `np.ndarray`. So I see no use for the `raw` parameter (but then again, I do not know pandas and their use-case), such that my hypothetical `DataArray.apply` and the existing `DataArray.pipe` are essentially the same.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,186868181
https://github.com/pydata/xarray/issues/1074#issuecomment-258423515,https://api.github.com/repos/pydata/xarray/issues/1074,258423515,MDEyOklzc3VlQ29tbWVudDI1ODQyMzUxNQ==,1310437,2016-11-04T12:55:15Z,2016-11-04T12:55:15Z,CONTRIBUTOR,"Aha! For my use-case, `DataArray.pipe` is perfectly fine, I just didn't know about it. I have to admit that I know nothing about pandas. Before I learned about xarray, pandas was not interesting to me at all. My datasets are often high-dimensional, which does not work well with pandas' orientation towards (one-dimensional) collections of observations. In that sense, I could rather relabel this issue (or create a new one) as a documentation problem. The API reference does not indicate the existence of `DataArray.pipe` at all (only `Dataset.pipe`, even though that one mentions that it works on `DataArray`s too). Also, there could possibly be a see-also link to `pipe` from `apply`. Shall I have a go at a PR?
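For reference, the pattern I was after, now via `DataArray.pipe` (made-up example):

```python
import numpy as np
import xarray as xr

def demean(arr):
    # operates on the DataArray itself, so dims/coords/attrs are preserved
    return arr - arr.mean()

da = xr.DataArray(np.arange(4.0), dims=['x'])
result = da.pipe(demean)
```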
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,186868181
https://github.com/pydata/xarray/issues/475#issuecomment-256206191,https://api.github.com/repos/pydata/xarray/issues/475,256206191,MDEyOklzc3VlQ29tbWVudDI1NjIwNjE5MQ==,1310437,2016-10-25T23:13:37Z,2016-10-25T23:17:40Z,CONTRIBUTOR,"Really? I get a `ValueError: Indexers must be 1 dimensional` (in `xarray/core/dataset.py:1031`, `isel_points(self, dim, **indexers)`) when I try. That is xarray 0.8.2, in fact from my fork cloned recently (~2-3 weeks ago), where I changed one or two `asarray` calls to `asanyarray` to work with units. Was there a recent change in this area?
EDIT: `xarray/core/dataset.py` looks very similar here on master too, and there are quite a few lines hinting that really only 1D indexers are supported.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,95114700
https://github.com/pydata/xarray/issues/475#issuecomment-256199958,https://api.github.com/repos/pydata/xarray/issues/475,256199958,MDEyOklzc3VlQ29tbWVudDI1NjE5OTk1OA==,1310437,2016-10-25T22:44:30Z,2016-10-25T22:44:30Z,CONTRIBUTOR,"Without following the discussion in detail, what is the status here? In particular, I would like to do pointwise selection on multiple 1D coordinates using multidimensional indexer arrays. I can do this with the current `isel_points`:
1. construct the multidimensional indexers
2. flatten them
3. create a corresponding `MultiIndex`
4. apply the flattened indexers using `isel_points`, and assign the multi-index as the new dimension
5. use `unstack` on the newly created dimension
The first three points can be somewhat simplified by instead putting all of the multidimensional indexers into a `Dataset` and then `stack`ing it, which produces consistent flat versions along with their multi-index.
Given this conceptually simple but somewhat tedious procedure (sketched below), couldn't it quite easily be folded into the current `isel_points`? Would a PR in that direction have a chance of being accepted?
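A rough sketch of the procedure above, untested and with made-up names, written against the current `isel_points` API, so take it with a grain of salt:

```python
import numpy as np
import pandas as pd
import xarray as xr

ds = xr.Dataset({'data': (('x', 'y'), np.random.rand(4, 5))})

# 1. multidimensional indexers, here with target shape (2, 3)
ix = np.array([[0, 1, 2], [1, 2, 3]])
iy = np.array([[0, 0, 1], [2, 3, 4]])

# 2./3. flatten them and build a MultiIndex describing the target shape
idx = pd.MultiIndex.from_product([range(2), range(3)], names=['row', 'col'])

# 4. pointwise selection with the flattened indexers,
#    then assign the multi-index as the new dimension's coordinate
flat = ds.isel_points(x=ix.ravel(), y=iy.ravel(), dim='points')
flat['points'] = idx

# 5. unstack the newly created dimension back into 'row' and 'col'
result = flat.unstack('points')
```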
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,95114700
https://github.com/pydata/xarray/issues/525#issuecomment-248255299,https://api.github.com/repos/pydata/xarray/issues/525,248255299,MDEyOklzc3VlQ29tbWVudDI0ODI1NTI5OQ==,1310437,2016-09-20T09:49:23Z,2016-09-20T09:51:30Z,CONTRIBUTOR,"Or another way to put it: while typical metadata/attributes are only relevant if you eventually read them (which is where you will notice if they were lost on the way), units are different: they work silently behind the scenes at all times, even if you do not explicitly look for them. You want an addition to fail if units don't match, without having to explicitly test first whether the operands have units. So what should the ufunc_hook do if it finds two Variables that don't seem to carry units, raise an exception? Most probably not, as that would prevent using xarray without units at the same time. So if the units are lost on the way, you might never notice, but end up with wrong data. To me, that is just not unlikely enough to happen, given the damage it can do (e.g. the time it takes to find out what's going on once you realise you are getting wrong data).
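As a concrete example of the failure mode I mean, with python-quantities (other unit packages behave similarly):

```python
import quantities as pq

length = 1.0 * pq.m
duration = 2.0 * pq.s

# with units attached, a mismatched addition fails loudly, as it should,
# without any explicit unit check on our side
try:
    length + duration
except ValueError as err:
    print('caught:', err)

# if the units are silently stripped somewhere along the way,
# the same addition happily returns a meaningless number instead
print(length.magnitude + duration.magnitude)
```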
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,100295585
https://github.com/pydata/xarray/issues/525#issuecomment-248255426,https://api.github.com/repos/pydata/xarray/issues/525,248255426,MDEyOklzc3VlQ29tbWVudDI0ODI1NTQyNg==,1310437,2016-09-20T09:50:00Z,2016-09-20T09:50:00Z,CONTRIBUTOR,"So for now, I'm hunting for `np.asarray`.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,100295585
https://github.com/pydata/xarray/issues/525#issuecomment-248252494,https://api.github.com/repos/pydata/xarray/issues/525,248252494,MDEyOklzc3VlQ29tbWVudDI0ODI1MjQ5NA==,1310437,2016-09-20T09:36:24Z,2016-09-20T09:36:24Z,CONTRIBUTOR,"#988 would certainly allow to me to implement unit functionality on xarray, probably by leveraging an existing units package.
What I don't like with that approach is the fact that I essentially end up with a separate distinct implementation of units. I am afraid that I will either have to re-implement many of the helpers that I wrote to work with physical quantities to be xarray aware. Furthermore, one important aspect of units packages is that it prevents you from doing conversion mistakes. But that only works as long as you don't forget to carry the units with you. Having units just as attributes to xarray makes it as simple as forgetting to read the attributes when accessing the data to lose the units.
The units inside xarray approach would have the advantage that whenever you end up accessing the data inside xarray, you automatically have the units with you.
From a conceptual point of view, the units are really an integral part of the data, so they should sit right there with the data. Whenever you do something with the data, you have to deal with the units. That is true no matter if it is implemented as an attribute handler or directly on the data array. My fear is, attributes leave the impression of ""optional"" metadata which are too easily lost. E.g. xarray doesn't call it's _ufunc_hook_ for some operation where it should, and you silently lose units. My hope is that with nested arrays that carry units, you would instead fail verbosely. Of course, `np.concatenate` is precisely one of these cases where unit packages struggle with to get their hook in (and where units on dtypes would help). So they fight the same problem. Nonetheless, these problems are known and solved as well as possible in the units packages, but in xarray, one would have to deal with them all over again.
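To make the `np.concatenate` point concrete (the exact behaviour depends on the numpy and quantities versions, so treat this as a sketch):

```python
import numpy as np
import quantities as pq

a = np.arange(3.0) * pq.m

# concatenate bypasses the usual subclass hooks, so the units may be
# silently dropped and a plain ndarray may come back
c = np.concatenate([a, a])
print(type(a), type(c))
```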
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,100295585
https://github.com/pydata/xarray/issues/525#issuecomment-248059952,https://api.github.com/repos/pydata/xarray/issues/525,248059952,MDEyOklzc3VlQ29tbWVudDI0ODA1OTk1Mg==,1310437,2016-09-19T17:24:21Z,2016-09-19T17:24:21Z,CONTRIBUTOR,"+1 for units support. I agree, parametrised dtypes would be the preferred solution, but I don't want to wait that long (I would be willing to contribute to that end, but I'm afraid that would exceed my knowledge of numpy).
I have never used dask. I understand that support for dask arrays is a central feature of xarray. However, the way I see it, if one put a (unit-aware) ndarray subclass into an xarray, units should work out of the box. As you discussed, this seems not so easy to make work together with dask (particularly in a generic way). But shouldn't that be an issue the dask community has to solve anyway (i.e. currently there is no way to use any units package together with dask, right)? In that sense, allowing such arrays inside xarrays would force users to choose between dask and units, which is a choice they have to make anyway. But for a large fraction of users, that would be a very quick route to units!
Or am I missing something here? I'll just try to monkeypatch xarray to that end, and see how far I get...
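A first experiment along those lines, before any monkeypatching, just checking what plain xarray does with a quantities array (names made up):

```python
import numpy as np
import quantities as pq
import xarray as xr

q = np.arange(5.0) * pq.m

# whether the units survive depends on whether xarray converts the input
# with np.asarray (strips the subclass) or np.asanyarray (keeps it)
da = xr.DataArray(q, dims=['x'])
print(type(q), type(da.data))
```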
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,100295585