html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue https://github.com/pydata/xarray/pull/2070#issuecomment-668287739,https://api.github.com/repos/pydata/xarray/issues/2070,668287739,MDEyOklzc3VlQ29tbWVudDY2ODI4NzczOQ==,244887,2020-08-03T23:22:53Z,2020-08-03T23:22:53Z,CONTRIBUTOR,Hi @dnowacki-usgs. Feel free to take all the credit! I have ended up swamped at work the last few months and just haven't had the time to get back into this yet. I am guessing the easiest way to move forward would be to fork my last commit and open up a new pull request.,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,316461072 https://github.com/pydata/xarray/pull/2070#issuecomment-586034963,https://api.github.com/repos/pydata/xarray/issues/2070,586034963,MDEyOklzc3VlQ29tbWVudDU4NjAzNDk2Mw==,244887,2020-02-14T00:13:15Z,2020-02-14T00:13:15Z,CONTRIBUTOR,Oh my... It's been some time since I have logged in to github... I will have to take a look at this over the weekend to see if I can remember where I was on this PR,"{""total_count"": 4, ""+1"": 4, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,316461072 https://github.com/pydata/xarray/issues/2049#issuecomment-383267324,https://api.github.com/repos/pydata/xarray/issues/2049,383267324,MDEyOklzc3VlQ29tbWVudDM4MzI2NzMyNA==,244887,2018-04-21T04:39:29Z,2018-04-21T04:39:29Z,CONTRIBUTOR,"Above PR is a first draft. It would seem that the kwargs for the dask array method are a subset of the numpy array method, so I based the docstring on these. Happy to do something else though if that makes sense.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,313010564 https://github.com/pydata/xarray/issues/2049#issuecomment-381651747,https://api.github.com/repos/pydata/xarray/issues/2049,381651747,MDEyOklzc3VlQ29tbWVudDM4MTY1MTc0Nw==,244887,2018-04-16T15:46:00Z,2018-04-16T15:47:18Z,CONTRIBUTOR,"I have a version of this working, but to get tests to pass I had to add the same behavior for `Variable` types (as the method was no longer being added from `NUMPY_UNARY_METHODS`). I don't think I have a very good picture of the proper use of `Variables` in the internal api, so I wasn't sure if it made sense to extend the behavior therein. Also I should say that on the first pass I had to do this outside of the `apply_ufunc` mechanism, as `apply_ufunc` doesn't keep `attrs` for `Variable`s (thus inspiring my question above). Just let me know if that makes sense or what alternative path seems best and I'll see if I can open a PR.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,313010564 https://github.com/pydata/xarray/issues/1882#issuecomment-365697240,https://api.github.com/repos/pydata/xarray/issues/1882,365697240,MDEyOklzc3VlQ29tbWVudDM2NTY5NzI0MA==,244887,2018-02-14T18:17:53Z,2018-02-14T18:17:53Z,CONTRIBUTOR,"> Xarray for Scalable Scientific Data Analysis Nice title! I know xarray has its origins and most of its current users in the earth science domains, and so I would expect much of the core of an xarray tutorial to involve various geo* flavored data, but since SciPy has attendees from so many different backgrounds it could be useful to try to survey the scope of work being done with xarray right now. 
I imagine there must be other users in astronomy, physics, biology and perhaps even quantitative civics/demography that could have interesting snippets to share. For my part, I am using xarray to work with microscopy data in a biological context, and would be happy to share a snippet or two. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,293913247 https://github.com/pydata/xarray/issues/1388#issuecomment-363072151,https://api.github.com/repos/pydata/xarray/issues/1388,363072151,MDEyOklzc3VlQ29tbWVudDM2MzA3MjE1MQ==,244887,2018-02-05T12:35:10Z,2018-02-05T12:35:10Z,CONTRIBUTOR,"@fujiisoup and @shoyer Really enlightening comments above. I think I am starting to get the dao of xarray a bit better :) >I was thinking whether such aggregation methods (including argmin) should propagate the coordinate. Agreed it would be nice to have a consistent and well-reasoned rule for coordinate propagation in aggregation methods. I think a key point here, which gets brought up in your example, is that it might make sense to have different subrules depending on the semantics of the operation. Functions like `argmax` are explicitly concerned with underlying ""indices"" (dimensions or otherwise) and so may call for different behavior from the `mean`, which is explicitly invariant under permutations of the underlying indices. The `max`/`min`/`median` functions are an interesting case to think about, in that they are also invariant under a change of underlying indices, but can have potentially more than one index that they are associated with and do not ""destroy"" information about the value at those indices. >My concern with adding an additional dimension is that it is always a little surprising and error-prone when we invent new dimension names not supplied by the user (for example, this can lead to conflicting names) Yeah, I felt a little dirty appending '_argmax'. >I think my favorite option is (2) with da.argmin_indices() returning a Dataset, which will allow da[da.argmin_indices()] OK. I think I understand now why @fujiisoup proposed outputting a Dataset rather than an array. That's a natural syntax for getting the values from the indices. >Either way, I would like a separate dedicated method for returning multiple indexing arrays. +1 to adding more dedicated methods if needed, since I think even if it isn't needed, the associated docs will need to make sure users are aware of the analogous `idx*` methods if they get added. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,224878728 https://github.com/pydata/xarray/issues/1388#issuecomment-362717912,https://api.github.com/repos/pydata/xarray/issues/1388,362717912,MDEyOklzc3VlQ29tbWVudDM2MjcxNzkxMg==,244887,2018-02-02T21:50:05Z,2018-02-02T21:50:05Z,CONTRIBUTOR,"I just came across the various argmax/idxmax (and related min) issues recently in a project I have been working on. In addition to agreeing that docs should be updated when appropriate, here are my two or three cents: - As someone new to xarray I like the idea of having both argmax/argmin and argmax_indices/argmin_indices, with the former returning the coordinate indices and the latter the underlying numpy indices, analogous to the numpy.argmax/numpy.argmin methods. This makes migrating from numpy ndarray data and collections of associated index arrays obvious (a common path into the xarray world I think). 
- I can also get that idxmax/idxmin might make a better name, given that one can have multi-indexed coordinates. If both argmax and idxmax methods are retained, it would probably be good to have the docs cross-reference each other. - In any case, to respond to @fujiisoup's above proposal, I like the idea of retaining the dimension names in the output, and adding a dimension to hold argmax dims, but I think it might make more sense to output a DataArray. By way of example, if I had something like: ```python size = (2,2,2,2) dims = list(""wxyz"") data = np.random.rand(*size) coords = {dim:[""{0}_{1}"".format(dim,s) for s in range(s)] for dim,s in zip(dims,size)} da = xr.DataArray(data, dims=dims, coords=coords) >>>da <xarray.DataArray (w: 2, x: 2, y: 2, z: 2)> array([[[[ 0.149945, 0.230338], [ 0.626969, 0.299918]], [[ 0.351764, 0.286436], [ 0.130604, 0.982152]]], [[[ 0.262667, 0.950426], [ 0.76655 , 0.681631]], [[ 0.635468, 0.735071], [ 0.901116, 0.601303]]]]) Coordinates: * w (w) <U3 'w_0' 'w_1' * x (x) <U3 'x_0' 'x_1' * y (y) <U3 'y_0' 'y_1' * z (z) <U3 'z_0' 'z_1' >>>argmax(da) array(['w_0', 'x_1', 'y_1', 'z_1'], dtype='<U3') >>>argmax(da, dim=list(""wy"")) array([[['w_1', 'y_1'], ['w_1', 'y_0']], [['w_1', 'y_1'], ['w_0', 'y_1']]], dtype=object) Coordinates: * x (x) object 'x_0' 'x_1' * z (z) object 'z_0' 'z_1' * argmaxdim (argmaxdim) Without coordinate values in between, what should xarray assume for the intermediate values? I guess I had imagined it would not try to plot those intermediate values. I think the behavior makes sense in 1d (pandas does the same linear interpolation I think) ``` xrYsum = xrAB.sum(dim = 'x') xrYsum.plot() ``` ![image](https://user-images.githubusercontent.com/244887/35489187-5e40eb94-0448-11e8-8068-f47415b2d833.png) but in 2d it seems weird. Is there a common use case where this interpolation is desirable? Perhaps it's just my ignorance speaking, but it feels like the behavior violates the principle of least astonishment, especially in light of the fact that irregular grids are one of the main use cases for xarray with explicit support in the [docs](http://xarray.pydata.org/en/stable/plotting.html#multidimensional-coordinates). >Probably the simplest way to fix this is to start with an all NaN array of the appropriate size. 👍 I'd be happy to take a stab at implementing plotting that gives (by default or through an optional argument) a result equivalent to the last plot you made, if you imagine xarray users would find it useful. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,292054887 https://github.com/pydata/xarray/issues/1850#issuecomment-359961228,https://api.github.com/repos/pydata/xarray/issues/1850,359961228,MDEyOklzc3VlQ29tbWVudDM1OTk2MTIyOA==,244887,2018-01-23T23:01:11Z,2018-01-23T23:01:11Z,CONTRIBUTOR,"I don't have any strong opinion about separate repos or contrib submodules, so long as there is some way to improve discoverability of methods. Having said that, many of the methods mentioned in #1288 are in the numpy namespace, and at least naively applicable to all domains. Would you consider numpy methods with semantics compatible with DataArrays and/or Datasets as appropriate to contribute to core xarray? 
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,290593053 https://github.com/pydata/xarray/issues/1288#issuecomment-359521912,https://api.github.com/repos/pydata/xarray/issues/1288,359521912,MDEyOklzc3VlQ29tbWVudDM1OTUyMTkxMg==,244887,2018-01-22T18:38:50Z,2018-01-22T18:58:03Z,CONTRIBUTOR,">I've written wrappers for svd, fft, psd, gradient, and specgram, for starts @lamorton I really like the suggestion from @shoyer about submodules for throwing wrappers from other libraries, but in the meantime I think I might like very much to check out your implementation of `fft` and `gradient` in particular if these are somewhere public. I have been hacking at at least the latter and other functions in the numpy/scipy scope.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,210704949