html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/3117#issuecomment-518320289,https://api.github.com/repos/pydata/xarray/issues/3117,518320289,MDEyOklzc3VlQ29tbWVudDUxODMyMDI4OQ==,1217238,2019-08-05T17:14:15Z,2019-08-05T17:14:15Z,MEMBER,"@nvictus are we good to go ahead and merge, and do follow-ups in other PRs?","{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,467771005
https://github.com/pydata/xarray/pull/3117#issuecomment-518074867,https://api.github.com/repos/pydata/xarray/issues/3117,518074867,MDEyOklzc3VlQ29tbWVudDUxODA3NDg2Nw==,1217238,2019-08-05T03:43:22Z,2019-08-05T03:43:22Z,MEMBER,"> At the moment, the behavior of Variables and DataArrays is such that `.data` provides the duck array and `.values` coerces to numpy, following the original behavior for dask arrays -- which made me realize, we never asked if this behavior is desired in general?

I think the right behavior is probably for `.values` to be implemented by calling `np.asarray()` on `.data`. That means it should raise on sparse arrays.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,467771005
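A minimal sketch (not part of this PR) of the `.values`-calls-`np.asarray()` proposal against pydata/sparse, which by default refuses implicit densification:

```python
# Sketch only: demonstrates why np.asarray() on .data would raise for sparse arrays.
import numpy as np
import sparse  # pydata/sparse

duck = sparse.COO.from_numpy(np.eye(3))
try:
    np.asarray(duck)  # sparse refuses to densify implicitly
except RuntimeError as err:
    print(err)  # explicit conversion requires duck.todense()
```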
https://github.com/pydata/xarray/pull/3117#issuecomment-517400198,https://api.github.com/repos/pydata/xarray/issues/3117,517400198,MDEyOklzc3VlQ29tbWVudDUxNzQwMDE5OA==,1217238,2019-08-01T18:14:38Z,2019-08-01T18:14:38Z,MEMBER,"> 2\. Operations not supported by the duck type. This happens in a few cases with pydata/sparse, and would have to be solved upstream, unless it's a special case where it might be okay to coerce. e.g. what happens with binary operations that mix array types?

This is totally fine for now, as long as there are clear errors when attempting to do an unsupported operation. We can write unit tests with expected failures, which should provide a clear roadmap for things to fix upstream in sparse.

We could attempt to define a minimum required implementation, but in practice I suspect this will be hard to nail down definitively. The ultimate determinant of what works will be xarray's implementation.","{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,467771005
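For example, an expected-failure test along these lines could mark an operation sparse does not yet handle; the operation chosen here is illustrative, not taken from the PR:

```python
# Hypothetical xfail test; interp() is just one example of an operation that
# currently needs a dense (NumPy) array under the hood.
import numpy as np
import pytest
import sparse
import xarray as xr

@pytest.mark.xfail(reason="not yet supported for pydata/sparse-backed arrays")
def test_interp_on_sparse_backed_dataarray():
    arr = xr.DataArray(
        sparse.COO.from_numpy(np.eye(3)),
        dims=("x", "y"),
        coords={"x": np.arange(3), "y": np.arange(3)},
    )
    arr.interp(x=[0.5, 1.5])
```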
https://github.com/pydata/xarray/pull/3117#issuecomment-512583158,https://api.github.com/repos/pydata/xarray/issues/3117,512583158,MDEyOklzc3VlQ29tbWVudDUxMjU4MzE1OA==,1217238,2019-07-17T21:53:50Z,2019-07-17T21:53:50Z,MEMBER,"> Would it make sense to just assume that all non-DataArray NEP-18 compliant arrays do not contain an xarray-compliant `coords` attribute?

Yes, let's switch:
```
coords = getattr(data, 'coords', None)
```
to
```
if isinstance(data, DataArray):
    coords = data.coords
```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,467771005
https://github.com/pydata/xarray/pull/3117#issuecomment-511180633,https://api.github.com/repos/pydata/xarray/issues/3117,511180633,MDEyOklzc3VlQ29tbWVudDUxMTE4MDYzMw==,1217238,2019-07-14T07:35:45Z,2019-07-14T07:35:45Z,MEMBER,"> Even though it failed when I tried applying an operation on the dataset, this is still awesome!

Yes, it really is!

For this specific failure, we should think about adding an option for the default `skipna` value, or maybe making the semantics depend on the array type.

If someone is using xarray to wrap a computation-oriented library like CuPy, they probably almost always want to set `skipna=False` (along with `join='exact'`). I don't think I've seen any deep learning library that has bothered to implement `nanmean`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,467771005
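Until such an option exists, a user wrapping a backend without nan-aware reductions would pass these arguments explicitly per call; a sparse-backed array is used below purely for illustration:

```python
# Illustrative usage only; the proposed default-`skipna` option is hypothetical.
import numpy as np
import sparse
import xarray as xr

arr = xr.DataArray(sparse.COO.from_numpy(np.eye(3)), dims=("x", "y"))
total = arr.sum(dim="x", skipna=False)        # avoids the nan* reduction path
a, b = xr.align(arr, arr * 2, join="exact")   # raises on misaligned indexes
                                              # rather than reindexing with NaN
```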