html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue https://github.com/pydata/xarray/issues/1192#issuecomment-459774086,https://api.github.com/repos/pydata/xarray/issues/1192,459774086,MDEyOklzc3VlQ29tbWVudDQ1OTc3NDA4Ng==,2448579,2019-02-01T16:07:52Z,2019-02-01T16:07:52Z,MEMBER,Looks like it,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-459771960,https://api.github.com/repos/pydata/xarray/issues/1192,459771960,MDEyOklzc3VlQ29tbWVudDQ1OTc3MTk2MA==,2443309,2019-02-01T16:01:59Z,2019-02-01T16:01:59Z,MEMBER,Should this have been closed by #2612?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-434478590,https://api.github.com/repos/pydata/xarray/issues/1192,434478590,MDEyOklzc3VlQ29tbWVudDQzNDQ3ODU5MA==,1217238,2018-10-30T21:34:54Z,2018-10-30T21:34:54Z,MEMBER,"see also https://github.com/pydata/xarray/issues/2525 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-433510805,https://api.github.com/repos/pydata/xarray/issues/1192,433510805,MDEyOklzc3VlQ29tbWVudDQzMzUxMDgwNQ==,14314623,2018-10-26T18:59:07Z,2018-10-26T18:59:07Z,CONTRIBUTOR,"I should add that I would be happy to work on an implementation, but probably need a good amount of pointers. Here is the implementation that I have been using (only works with dask.arrays at this point). Should have posted that earlier to avoid @rabernat's zingers over here.

```python
import warnings

import numpy as np
import xarray as xr
from dask.array import Array, coarsen


def aggregate(da, blocks, func=np.nanmean, debug=False):
    """"""
    Performs efficient block averaging in one or multiple dimensions.
    Only works on regular grid dimensions.

    Parameters
    ----------
    da : xarray DataArray (must be a dask array!)
    blocks : list
        List of tuples containing the dimension and interval to aggregate over
    func : function
        Aggregation function. Defaults to numpy.nanmean

    Returns
    -------
    da_agg : xarray DataArray
        Aggregated array

    Examples
    --------
    >>> from xarrayutils import aggregate
    >>> import numpy as np
    >>> import xarray as xr
    >>> import matplotlib.pyplot as plt
    >>> %matplotlib inline
    >>> import dask.array as da
    >>> x = np.arange(-10,10)
    >>> y = np.arange(-10,10)
    >>> xx,yy = np.meshgrid(x,y)
    >>> z = xx**2-yy**2
    >>> a = xr.DataArray(da.from_array(z, chunks=(20, 20)), coords={'x':x,'y':y}, dims=['y','x'])
    >>> print(a)
    dask.array
    Coordinates:
      * y        (y) int64 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9
      * x        (x) int64 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9
    >>> blocks = [('x',2),('y',5)]
    >>> a_coarse = aggregate(a,blocks,func=np.mean)
    >>> print(a_coarse)
    dask.array
    Coordinates:
      * y        (y) int64 -10 0
      * x        (x) int64 -10 -8 -6 -4 -2 0 2 4 6 8
    Attributes:
        Coarsened with:
        Coarsenblocks: [('x', 2), ('y', 10)]
    """"""
    # Check if the input is a dask array (I might want to convert this
    # automatically in the future)
    if not isinstance(da.data, Array):
        raise RuntimeError('data array data must be a dask array')

    # Check data type of blocks
    # TODO write test
    if (not all(isinstance(n[0], str) for n in blocks) or
            not all(isinstance(n[1], int) for n in blocks)):
        print('blocks input', str(blocks))
        raise RuntimeError(""block dimension must be dtype(str), \
            e.g. ('lon',4)"")

    # Check if the given array has the dimension specified in blocks
    try:
        block_dict = dict((da.get_axis_num(x), y) for x, y in blocks)
    except ValueError:
        raise RuntimeError(""'blocks' contains non matching dimension"")

    # Check the size of the excess in each aggregated axis
    blocks = [(a[0], a[1], da.shape[da.get_axis_num(a[0])] % a[1])
              for a in blocks]

    # for now default to trimming the excess
    da_coarse = coarsen(func, da.data, block_dict, trim_excess=True)

    # for now default to only the dims
    new_coords = dict([])
    # for cc in da.coords.keys():
    warnings.warn(""WARNING: only dimensions are carried over as coordinates"")
    for cc in list(da.dims):
        new_coords[cc] = da.coords[cc]
        for dd in blocks:
            if dd[0] in list(da.coords[cc].dims):
                new_coords[cc] = \
                    new_coords[cc].isel(
                        **{dd[0]: slice(0, -(1 + dd[2]), dd[1])})

    attrs = {'Coarsened with': str(func), 'Coarsenblocks': str(blocks)}
    da_coarse = xr.DataArray(da_coarse, dims=da.dims, coords=new_coords,
                             name=da.name, attrs=attrs)
    return da_coarse
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-433509072,https://api.github.com/repos/pydata/xarray/issues/1192,433509072,MDEyOklzc3VlQ29tbWVudDQzMzUwOTA3Mg==,1197350,2018-10-26T18:53:05Z,2018-10-26T18:53:05Z,MEMBER,"Just to be clear, my comment above was a joke... @jbusecke and I are good friends! 🤣 ","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 1, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-433508754,https://api.github.com/repos/pydata/xarray/issues/1192,433508754,MDEyOklzc3VlQ29tbWVudDQzMzUwODc1NA==,1197350,2018-10-26T18:52:05Z,2018-10-26T18:52:05Z,MEMBER,https://twitter.com/wesmckinn/status/1055883061513084928,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-433160023,https://api.github.com/repos/pydata/xarray/issues/1192,433160023,MDEyOklzc3VlQ29tbWVudDQzMzE2MDAyMw==,14314623,2018-10-25T18:35:57Z,2018-10-25T18:35:57Z,CONTRIBUTOR,"Is this feature still being considered? A big +1 from me. I wrote my own function to achieve this (using dask.array.coarsen), but I was planning to implement similar functionality in [xgcm](https://github.com/xgcm/xgcm/issues/103), and it would be ideal if we could use an upstream implementation from xarray. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-305538498,https://api.github.com/repos/pydata/xarray/issues/1192,305538498,MDEyOklzc3VlQ29tbWVudDMwNTUzODQ5OA==,1217238,2017-06-01T15:57:31Z,2017-06-01T15:57:31Z,MEMBER,"The dask implementation is short enough that I would certainly reimplement/vendor the pure numpy version for xarray. 
It might also be worth considering using the related utility [`skimage.util.view_as_blocks`](http://scikit-image.org/docs/dev/api/skimage.util.html#skimage.util.view_as_blocks) from skimage, which does the necessary reshaping using views (which should be much faster).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-305209648,https://api.github.com/repos/pydata/xarray/issues/1192,305209648,MDEyOklzc3VlQ29tbWVudDMwNTIwOTY0OA==,1217238,2017-05-31T14:46:55Z,2017-05-31T14:46:55Z,MEMBER,"Currently dask is an optional dependency for xarray, which I would like to preserve if possible. I'll take a glance at the implementation shortly, but my guess is that we will indeed want to vendor the numpy version into xarray. On Wed, May 31, 2017 at 6:38 AM Peter Steinberg wrote: > Hi @darothen earthio is a recent > experimental refactor of what was the elm.readers subpackage. elm - > Ensemble Learning Models was developed with a Phase I NASA SBIR in 2016 and > in part reflects our thinking in late 2015 when xarray was newer and we > were planning the proposal. In the last ca. month we have started a Phase > II of development on multi-model dask/xarray ML algorithms based on xarray, > dask, scikit-learn and a Bokeh maps UI for tasks like land cover > classification. I'll add you to elm and feel free to contact me at > psteinberg [at] continuum [dot] io. We will do more promotion / blogs in > the near term and also in about 12 months we will release a free/open > collection of notebooks that form a ""Machine Learning with Environmental > Data"" 3-day course. > > Back to the subject matter of the thread.... I assigned this issue to > myself. I'll wait to get started until after @shoyer > comments on @laliberte > 's question: > > (1) replicate serial coarsen into xarray or (2) point to dask coarsen > methods? ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-305189172,https://api.github.com/repos/pydata/xarray/issues/1192,305189172,MDEyOklzc3VlQ29tbWVudDMwNTE4OTE3Mg==,1445602,2017-05-31T13:38:40Z,2017-05-31T13:39:22Z,NONE,"Hi @darothen `earthio` is a recent experimental refactor of what was the `elm.readers` subpackage. `elm` - Ensemble Learning Models was developed with a Phase I NASA SBIR in 2016 and in part reflects our thinking in late 2015 when xarray was newer and we were planning the proposal. In the last ca. month we have started a Phase II of development on multi-model dask/xarray ML algorithms based on xarray, dask, scikit-learn and a Bokeh maps UI for tasks like land cover classification. I'll add you to `elm` and feel free to contact me at psteinberg [at] continuum [dot] io. We will do more promotion / blogs in the near term and also in about 12 months we will release a free/open collection of notebooks that form a ""Machine Learning with Environmental Data"" 3-day course. Back to the subject matter of the thread.... You can assign the issue to me (can you add me also to xarray repo so I can assign myself things?). 
I'll wait to get started until after @shoyer comments on @laliberte 's question: > (1) replicate serial coarsen into xarray or (2) point to dask coarsen methods? ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-305178905,https://api.github.com/repos/pydata/xarray/issues/1192,305178905,MDEyOklzc3VlQ29tbWVudDMwNTE3ODkwNQ==,4992424,2017-05-31T12:59:52Z,2017-05-31T12:59:52Z,NONE,"Not to hijack the thread, but @PeterDSteinberg - this is the first I've heard of earthio and I think there would be a lot of interest from the broader atmospheric/oceanic sciences community to hear about what your all's plans are. Could your team do a blog post on Continuum sometime outlining the goals of the project?","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-305176003,https://api.github.com/repos/pydata/xarray/issues/1192,305176003,MDEyOklzc3VlQ29tbWVudDMwNTE3NjAwMw==,3217406,2017-05-31T12:45:18Z,2017-05-31T12:45:18Z,CONTRIBUTOR,"The reason I ask is that, ideally, ``coarsen`` would work exactly the same with ``dask.array`` and ``np.ndarray`` data. By using both serial and parallel coarsen methods from ``dask``, we are adding a dependency but we are ensuring forward compatibility. @shoyer, what's your preference? (1) replicate serial coarsen into ``xarray`` or (2) point to ``dask`` coarsen methods?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-305175143,https://api.github.com/repos/pydata/xarray/issues/1192,305175143,MDEyOklzc3VlQ29tbWVudDMwNTE3NTE0Mw==,306380,2017-05-31T12:36:05Z,2017-05-31T12:36:05Z,MEMBER,"My guess is that if you want to avoid a strong dependence on Dask then you'll want to copy the code over regardless. Historically chunk.py hasn't been considered public (we don't publish docstrings in the docs for example). That being said it hasn't moved in a long while and I don't see any reason for it to move. I'm certainly willing to commit to going through a lengthy deprecation cycle if it does need to move.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-305169201,https://api.github.com/repos/pydata/xarray/issues/1192,305169201,MDEyOklzc3VlQ29tbWVudDMwNTE2OTIwMQ==,3217406,2017-05-31T12:00:11Z,2017-05-31T12:00:11Z,CONTRIBUTOR,If it's part of ``dask`` then it would be almost trivial to implement in ``xarray``. @mrocklin Can we assume that ``dask/array/chunk.py::coarsen`` is part of the public API?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-305031755,https://api.github.com/repos/pydata/xarray/issues/1192,305031755,MDEyOklzc3VlQ29tbWVudDMwNTAzMTc1NQ==,306380,2017-05-30T22:55:21Z,2017-05-30T22:55:21Z,MEMBER,"> I would be happy with a coarsen method, though I'd prefer to have an non-dask implementation, too. Dask has this actually. We had to build it before we could build the parallel version. 
See `dask/array/chunk.py::coarsen`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-305028421,https://api.github.com/repos/pydata/xarray/issues/1192,305028421,MDEyOklzc3VlQ29tbWVudDMwNTAyODQyMQ==,1445602,2017-05-30T22:36:15Z,2017-05-30T22:36:15Z,NONE,"Hello @laliberte @shoyer @jhamman . I'm with Continuum and working on NASA-funded Earth science ML (see [ensemble learning models in github](https://github.com/ContinuumIO/elm) and its documentation [here](http://ensemble-learning-models.readthedocs.io/en/latest/) as well as `earthio`, an experimental repo we have discussed simplifying and transitioning to Xarray - [earthio issue 12](https://github.com/ContinuumIO/earthio/issues/12) and [earthio issue 13](https://github.com/ContinuumIO/earthio/issues/13)). We (Continuum NASA, dask, and datashader team members) met with @rabernat this morning and discussed ideas for collaborating better with the Xarray team. I'll comment on more issues more regularly and make some experimental PRs over the next month. I'd like to keep most of the discussion on github issues so it is of general utility, but I'm happy to chat anytime if you want to talk further detail on longer term goals with Xarray. We can submit a PR on this issue for dask's coarsen and the specs above for using `block_reduce` in some situations. We have a variety of tasks we are covering now and in a planning / architecture phase for NASA. If we are too slow to respond to this issue, feel free to ping me.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-271752398,https://api.github.com/repos/pydata/xarray/issues/1192,271752398,MDEyOklzc3VlQ29tbWVudDI3MTc1MjM5OA==,2443309,2017-01-11T01:32:33Z,2017-01-11T01:32:33Z,MEMBER,I think this would be a nice feature and something that would fit nicely within xarray. The spatial resampling that I'm working towards is 1) a ways off and 2) quite a bit more domain specific than this. I'm +1!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-270439515,https://api.github.com/repos/pydata/xarray/issues/1192,270439515,MDEyOklzc3VlQ29tbWVudDI3MDQzOTUxNQ==,3217406,2017-01-04T17:59:08Z,2017-01-04T17:59:08Z,CONTRIBUTOR,"The ``dask`` implementation has the following API: ``dask.array.coarsen(reduction, x, axes, trim_excess=False)`` so a proposed ``xarray`` API could look like: ``xarray.coarsen(reduction, x, axes, chunks=None, trim_excess=False)``, resulting in the following implementation: 1. If the underlying data to ``x`` is ``dask.array``, yields x.chunks(chunks).array.coarsen(reduction, axes, trim_excess) 2. Else, copy the ``block_reduce`` function. Does that fit with the ``xarray`` API?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089 https://github.com/pydata/xarray/issues/1192#issuecomment-270427209,https://api.github.com/repos/pydata/xarray/issues/1192,270427209,MDEyOklzc3VlQ29tbWVudDI3MDQyNzIwOQ==,1217238,2017-01-04T17:12:04Z,2017-01-04T17:12:15Z,MEMBER,"This has the feel of a multi-dimensional resampling operation. 
I could potentially see this as part of that interface (e.g., `array.resample(x=3, y=3).mean()`), but that may be a slightly different operation because `resample`'s window size is in label rather than integer coordinates. That said, this seems useful and I wouldn't get too hung up about the optimal interface. I would be happy with a `coarsen` method, though I'd prefer to have a non-dask implementation, too. Potentially we could simply copy in skimage's [`block_reduce` function](http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.block_reduce) for the non-dask implementation, which if I recall correctly does some clever tricks with reshaping to do the calculation efficiently. cc @jhamman who has been thinking about regridding/resampling.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,198742089
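
Several comments above refer to the reshaping trick behind `skimage.measure.block_reduce` as the non-dask code path. Below is a minimal NumPy-only sketch of that idea, assuming a trim-the-excess policy; the helper name `block_reduce_numpy` is hypothetical and not part of xarray, dask, or skimage.

```python
import numpy as np


def block_reduce_numpy(x, block_sizes, func=np.mean):
    """Aggregate ``x`` over non-overlapping blocks, trimming any excess."""
    # Trim each axis so its length is an exact multiple of the block size.
    trim = tuple(slice(0, (n // b) * b) for n, b in zip(x.shape, block_sizes))
    trimmed = x[trim]
    # Reshape every axis of length n_blocks * b into a (n_blocks, b) pair...
    new_shape = []
    for n, b in zip(trimmed.shape, block_sizes):
        new_shape.extend([n // b, b])
    reshaped = trimmed.reshape(new_shape)
    # ...and reduce over the block axes (every second axis after the reshape).
    return func(reshaped, axis=tuple(range(1, reshaped.ndim, 2)))


z = np.arange(40).reshape(4, 10)
print(block_reduce_numpy(z, (2, 5)))  # [[ 7. 12.]
                                      #  [27. 32.]]
```

`skimage.util.view_as_blocks` achieves the same block grouping with a strided view instead of a reshape, which is the "much faster" point made in the thread.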
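The thread also contrasts `dask.array.coarsen(reduction, x, axes, trim_excess=False)` with the `DataArray.coarsen` method that xarray later gained (see the #2612 reference above). Here is a short usage sketch, assuming reasonably recent dask and xarray releases; the array, dimension names, and window sizes are illustrative.

```python
import numpy as np
import xarray as xr
import dask.array as dsa

data = dsa.from_array(np.arange(200.0).reshape(10, 20), chunks=(10, 20))

# dask.array.coarsen(reduction, x, axes, trim_excess=False): axes maps the
# axis *number* to the block size along that axis.
coarse_dask = dsa.coarsen(np.mean, data, {0: 2, 1: 5})

# xarray's coarsen maps dimension *names* to window sizes and returns an
# object exposing the usual reductions (mean, sum, ...).
arr = xr.DataArray(data, dims=["y", "x"])
coarse_xr = arr.coarsen(y=2, x=5, boundary="trim").mean()

print(coarse_dask.compute().shape)  # (5, 4)
print(coarse_xr.shape)              # (5, 4)
```

`boundary="trim"` mirrors the `trim_excess=True` behaviour discussed above; the default `"exact"` raises instead when a window does not evenly divide its dimension.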