html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/4610#issuecomment-1433726618,https://api.github.com/repos/pydata/xarray/issues/4610,1433726618,IC_kwDOAMm_X85VdO6a,35968931,2023-02-16T21:17:56Z,2023-02-16T21:17:56Z,MEMBER,"> it was triggering a load
Can we not just test the in-memory performance by `.load()`-ing first? Then worry about dask performance? That's what I was vaguely getting at in my comment: testing the in-memory performance but also plotting the dask graph.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,750985364
https://github.com/pydata/xarray/issues/4610#issuecomment-1433686861,https://api.github.com/repos/pydata/xarray/issues/4610,1433686861,IC_kwDOAMm_X85VdFNN,14371165,2023-02-16T20:39:54Z,2023-02-16T20:39:54Z,MEMBER,"Nice, I was looking at the real example too, `Temp_url = 'http://apdrc.soest.hawaii.edu:80/dods/public_data/WOA/WOA13/5_deg/annual/temp' etc..`, and it was triggering a load in set_dims:

","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,750985364
https://github.com/pydata/xarray/issues/4610#issuecomment-1433675446,https://api.github.com/repos/pydata/xarray/issues/4610,1433675446,IC_kwDOAMm_X85VdCa2,35968931,2023-02-16T20:29:25Z,2023-02-16T20:29:25Z,MEMBER,"> Could you show the example that's this slow, @TomNicholas ? So I can play around with it too.
I think I just timed the difference in the (unweighted) ""real"" example I gave in the notebook. (Not the weighted one because that didn't give the right answer with flox for some reason.)
> One thing I noticed in your notebook is that you haven't used chunks={} on the open_dataset, which seems to trigger data loading in strange places in xarray (places that call self.data), but I'm not sure this is your actual problem.
Fair point, worth trying.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,750985364
https://github.com/pydata/xarray/issues/4610#issuecomment-1433670641,https://api.github.com/repos/pydata/xarray/issues/4610,1433670641,IC_kwDOAMm_X85VdBPx,14371165,2023-02-16T20:24:51Z,2023-02-16T20:25:36Z,MEMBER,"> * Absolute speed of xhistogram appears to be 3-4x higher, and that's using `numpy_groupies` in flox. Possibly flox could be faster if using numba but not sure yet.
Could you show the example that's this slow, @TomNicholas ? So I can play around with it too.
One thing I noticed in your notebook is that you haven't used `chunks={}` on the open_dataset, which seems to trigger data loading in strange places in xarray (places that call `self.data`), but I'm not sure this is your actual problem.","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",,750985364
https://github.com/pydata/xarray/issues/4610#issuecomment-1425198851,https://api.github.com/repos/pydata/xarray/issues/4610,1425198851,IC_kwDOAMm_X85U8s8D,2448579,2023-02-10T05:38:13Z,2023-02-10T05:38:31Z,MEMBER,"> Absolute speed of xhistogram appears to be 3-4x higher, and that's using numpy_groupies in flox. Possibly flox could be faster if using numba but not sure yet.
Nah, in my experience, the overhead is ""factorizing"" (`pd.cut`/`np.digitize`), i.e. converting to integer bins, and then converting the nD problem to a 1D problem for `bincount`. numba doesn't really help.
-----
3-4x is a lot bigger than I expected. I was hoping for under 2x because flox is more general.
I think the problem is that `pandas.cut` is a lot slower than `np.digitize`.
We could swap that out easily here: https://github.com/xarray-contrib/flox/blob/daebc868c13dad74a55d74f3e5d24e0f6bbbc118/flox/core.py#L473
I think the one special case to consider is binning datetimes, and that digitize and pd.cut have different defaults for `side` or `closed`.
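That defaults mismatch is easy to check directly; a minimal comparison of the two (illustrative, not flox code):

```python
import numpy as np
import pandas as pd

bins = np.array([0.0, 1.0, 2.0])
x = np.array([1.0])  # a value sitting exactly on an interior edge

# np.digitize default (right=False): intervals closed on the left,
# so 1.0 lands in the second bin [1, 2) -> index 2
print(np.digitize(x, bins))  # [2]

# pd.cut default (right=True): intervals closed on the right,
# so 1.0 lands in the first bin (0, 1] -> code 0
print(pd.cut(x, bins).codes)  # [0]
```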
-----
> Dask graphs simplicity. Xhistogram literally uses blockwise, whereas the flox graphs IIUC are blockwise-like but actually a specially-constructed HLG right now. (
`blockwise` and `sum`.
Ideally `flox` would use a `reduction` that takes two array arguments (the array to reduce and the array to group by). Currently both [cubed](https://tom-e-white.com/cubed/operations.html#reduction-and-arg-reduction) and [dask](https://docs.dask.org/en/stable/generated/dask.array.reduction.html) only accept one argument.
As a workaround, we could replace `dask.array._tree_reduce` with `dask.array.reduction(chunk=lambda x: x, ...)` and then it would more or less all be public API that is common to dask and cubed.
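As a toy illustration of that pattern (plain numpy standing in for dask blocks): compute one partial histogram per chunk, then tree-reduce by summing, so the chunk step of the generic `reduction` is essentially the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1_000)
edges = np.linspace(0.0, 1.0, 11)

# blockwise step: one partial histogram per chunk of the array
chunks = np.array_split(x, 10)
partials = [np.histogram(c, bins=edges)[0] for c in chunks]

# tree-reduce step: summing partials is associative, so a generic
# dask/cubed reduction can combine them in any order
counts = np.sum(partials, axis=0)

assert np.array_equal(counts, np.histogram(x, bins=edges)[0])
```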
> Flox has various clever schemes for making general chunked groupby operations run more efficiently, but I don't think histogramming would really benefit from those unless there is a strong pattern to which values likely fall in which bins, that is known a priori.
Yup. unlikely to help here.
","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,750985364
https://github.com/pydata/xarray/issues/4610#issuecomment-1423144049,https://api.github.com/repos/pydata/xarray/issues/4610,1423144049,IC_kwDOAMm_X85U03Rx,35968931,2023-02-08T19:40:58Z,2023-02-08T20:25:04Z,MEMBER,"## Q: Use xhistogram approach or flox-powered approach?
@dcherian recently showed how his [flox package](https://github.com/xarray-contrib/flox) can perform histograms as groupby-like reductions. This raises the question of which approach would be better to use in a histogram function in xarray.
(This is related to but better than what we had [tried previously](https://github.com/xgcm/xhistogram/issues/60) with xarray groupby and numpy_groupies.)
**[Here's a WIP notebook comparing the two approaches.](https://gist.github.com/TomNicholas/9a2b6c97f8d19b60e9c685b76b9c76c6)**
Both approaches can feasibly do:
- Histograms which leave some dimensions excluded (broadcast over),
- Multi-dimensional histograms (e.g. binning two different variables to produce one 2D histogram),
- Normalized histograms (return PDFs instead of counts),
- Weighted histograms,
- Multi-dimensional bins (as @aaronspring asks for above - but it requires work - see [how to do it in flox](https://github.com/xgcm/xhistogram/issues/28#issuecomment-1386490718), and [my stalled PR to xhistogram](https://github.com/xgcm/xhistogram/pull/59)).
Pros of using flox-powered reductions:
- Much less code - the flox approach is basically one call to flox.
- Fewer codepaths, with groupby logic and all histogram functionality flowing through `flox.xarray_reduce`.
- Likely clearer code than the kinda impenetrable reshaped bincount logic lurking in the depths of xhistogram.
- Supporting new features (e.g. multidimensional bins) should be simpler in flox because the options don't have to be propagated all the way down to the level of the `np.bincount` caller.
Pros of using xhistogram's blockwise bincount approach:
- Absolute speed of xhistogram appears to be 3-4x higher, and that's using `numpy_groupies` in flox. Possibly flox could be faster if using numba but not sure yet.
- Dask graphs simplicity. Xhistogram literally uses `blockwise`, whereas the flox graphs IIUC are blockwise-like but actually a specially-constructed HLG right now. (Also important for supporting other parallel backends.) I suspect that in practice both perform similarly well after graph optimization but I have not tested this at scale, and flox's graph might be more sensitive to extra steps in the calculation like adding weights or normalising the result.
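For intuition, the core computation both approaches ultimately reduce to is factorize-then-bincount; a minimal numpy sketch of the idea (not the actual code of either library):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random(1_000)
edges = np.linspace(0.0, 1.0, 11)

# factorize: map each value to an integer bin label (the groupby key)
labels = np.digitize(data, edges) - 1
inside = (labels >= 0) & (labels < len(edges) - 1)

# bincount: a groupby-count over the integer labels
counts = np.bincount(labels[inside], minlength=len(edges) - 1)

assert np.array_equal(counts, np.histogram(data, bins=edges)[0])
```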
Other thoughts:
- Flox has various clever schemes for making general chunked groupby operations run more efficiently, but I don't think histogramming would really benefit from those unless there is a strong pattern to which values likely fall in which bins, that is known a priori.
- Deepak's example using flox uses `pandas.IntervalIndex` to represent the bins on the result object, whereas xhistogram just returns the mid-points of the bins, throwing that info away. This seems like a cool idea on its own, but probably requires some extra work to make sure it's handled by the indexes refactor and the plotting code.
- In my comparison notebook there's something I'm missing that's causing my ""real example"" (from xhistogram docs) to not actually use the provided weights. I suspect it's something simple - any idea @dcherian?
xref https://github.com/xgcm/xhistogram/issues/60, https://github.com/xgcm/xhistogram/issues/28
","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,750985364
https://github.com/pydata/xarray/issues/4610#issuecomment-858693187,https://api.github.com/repos/pydata/xarray/issues/4610,858693187,MDEyOklzc3VlQ29tbWVudDg1ODY5MzE4Nw==,35968931,2021-06-10T14:54:31Z,2021-06-10T14:54:31Z,MEMBER,"> We may want to also reimplement it using numpy_groupies, which I think is smarter than our implementation in xhistogram.
Given the performance I found in https://github.com/xgcm/xhistogram/issues/60, I think we probably want to use the `dask.blockwise` approach instead of the `numpy_groupies` one.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,750985364
https://github.com/pydata/xarray/issues/4610#issuecomment-853143060,https://api.github.com/repos/pydata/xarray/issues/4610,853143060,MDEyOklzc3VlQ29tbWVudDg1MzE0MzA2MA==,35968931,2021-06-02T15:51:28Z,2021-06-02T15:51:36Z,MEMBER,"Okay great, thanks for the patient explanation @aaronspring ! Will tag you when this has progressed to the point that you can try it out.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,750985364
https://github.com/pydata/xarray/issues/4610#issuecomment-853123125,https://api.github.com/repos/pydata/xarray/issues/4610,853123125,MDEyOklzc3VlQ29tbWVudDg1MzEyMzEyNQ==,35968931,2021-06-02T15:26:06Z,2021-06-02T15:26:06Z,MEMBER,"> my point about the bins is that if the inputs are two xr.datasets, then also the bins should be two xr.datasets.
This makes sense, but it sounds like this suggestion (of accepting Datasets, not just DataArrays) is mostly a convenience tool for quickly applying histograms to particular variables across multiple datasets. It's not fundamentally different from picking and choosing the variables you want from multiple datasets and feeding them into `histogram` as dataarrays. That kind of pre-calculation selecting and organising is, in my opinion, problem-specific and something that's more appropriate in the library that calls xarray (i.e. xskillscore).
I think we should focus on including features that enable analyses that would otherwise be difficult or impossible, for example ND bins: without allowing bins to be >1D at a low level internally, it would be fairly difficult to replicate the same functionality just by wrapping `histogram`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,750985364
https://github.com/pydata/xarray/issues/4610#issuecomment-852369352,https://api.github.com/repos/pydata/xarray/issues/4610,852369352,MDEyOklzc3VlQ29tbWVudDg1MjM2OTM1Mg==,35968931,2021-06-01T18:59:37Z,2021-06-01T18:59:37Z,MEMBER,"For each dataset in what? Do you mean for each input dataarray? I'm proposing an API in which you either pass multiple DataArrays as data (what xhistogram currently accepts), or you can call `.histogram()` as a method on a single dataset, equivalent to passing in all the `data_vars` of that one dataset. What would passing multiple datasets to `histogram()` allow you to do?
> If bins is only a dataArray, I cannot have this. Can I?
If bins can be a list of multiple dataarrays then you can have this, right? i.e.
```python
histogram(da1, da2, bins=[bins_for_da1, bins_for_da2])
```
where `bins_for_da1` is itself an ND `xr.DataArray`. With that you can have different bins for different data variables, as well as multidimensional bins that can vary over time/quantile etc.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,750985364
https://github.com/pydata/xarray/issues/4610#issuecomment-852257864,https://api.github.com/repos/pydata/xarray/issues/4610,852257864,MDEyOklzc3VlQ29tbWVudDg1MjI1Nzg2NA==,35968931,2021-06-01T16:21:39Z,2021-06-01T16:21:53Z,MEMBER,"> I cannot find this in #5400. I should checkout and run the code locally.
#5400 is just a skeleton right now; it won't compute anything, only raise a `NotImplementedError`.
> defining bin edges based on quantiles of the climatology
One of the bullets above is for N-dimensional bins, passed as xr.DataArrays. If we allow multidimensional xr.DataArrays as bins, then you could pass bins which changed at each quantile in that way.
What I'm unclear about is what you want to achieve by inputting an xarray.Dataset that couldn't be done with inputs of ND xr.DataArrays as both data and bins?
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,750985364
https://github.com/pydata/xarray/issues/4610#issuecomment-852174014,https://api.github.com/repos/pydata/xarray/issues/4610,852174014,MDEyOklzc3VlQ29tbWVudDg1MjE3NDAxNA==,35968931,2021-06-01T14:31:01Z,2021-06-01T14:31:01Z,MEMBER,"@aaronspring I'm a bit confused by your comment.
The (proposed) API in #5400 [does have](https://github.com/pydata/xarray/pull/5400/files?file-filters%5B%5D=.py#diff-763e3002fd954d544b05858d8d138b828b66b6a2a0ae3cd58d2040a652f14638R7631) a `Dataset.hist()` method, but it would just create an N-D histogram of the N variables in the dataset. The idea being that if I had only loaded the variables of interest I could immediately make the histogram of interest, e.g.
```python
ds = xr.open_dataset(file)[['temperature', 'salinity']]
ds.hist() # creates 2D temperature-salinity histogram
```
That's not the same thing as using Datasets as bins though - but I'm not really sure I understand the use case for that or what it allows. You can already choose different bins to use for each input variable. Are you saying it would be neater if you could assign bins to input variables via a dict-like dataset, rather than the arguments being in the corresponding positions in a list?
The example you linked doesn't pass datasets as bins either, it just loops over multiple input datasets and assumes you want to calculate joint histograms between those datasets.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,750985364
https://github.com/pydata/xarray/issues/4610#issuecomment-844322268,https://api.github.com/repos/pydata/xarray/issues/4610,844322268,MDEyOklzc3VlQ29tbWVudDg0NDMyMjI2OA==,35968931,2021-05-19T17:37:49Z,2021-05-28T14:07:25Z,MEMBER,"Update on this: in [a PR to xhistogram](https://github.com/xgcm/xhistogram/pull/49) we have a rough proof-of-principle for a dask-parallelized, axis-aware implementation of N-dimensional histogram calculations, suitable for eventually integrating into xarray.
We still need to complete the work over in xhistogram, but for now I want to suggest what I think the eventual API should be for this functionality within xarray:
### Top-level function
xhistogram's xarray API is essentially one `histogram` function, which accepts one or more xarray DataArrays. Therefore it makes sense to add `histogram` or `hist` as a top-level function, similar to the existing `cov`, `corr`, `dot` or `polyval`.
### New methods
We could also add a `dataarray.hist()` method for 1-D histograms and possibly a `dataset.hist(vars=['density', 'temperature'])` for quickly making N-D histograms.
### The existing `plot.hist` method (the tricky bit)
There is already a `da.plot.hist()` method, which is a paper-thin wrapper around `matplotlib.pyplot.hist` that flattens the dataarray before plotting. It would be nice if this internally dispatched to the new `da.hist()` method before plotting the result, but `pyplot.hist` does both the bincounting and the plotting, so it might not be simple to do that.
This is also potentially related to @dcherian 's [PR for facets and hue with hist](https://github.com/pydata/xarray/pull/4868), in that a totally consistent treatment would use the axis-aware histogram algorithm to calculate each separate facet histogram by looping over non-binned dimensions. Again the problem is that AFAIK matplotlib doesn't offer a quick way to plot a histogram without recomputing the bins and counts. Any suggestions here?
Adding an optional `dim` argument to `da.plot.hist()` doesn't really make sense unless we also add faceting, because otherwise the only shape of result that `da.plot.hist()` could actually plot is one where we have binned over the entire flattened array.
It would also be nice if
`da.hist().plot.hist()`
was identical to
`da.plot.hist()`
which requires the format of the output of `da.hist()` to be compatible with `da.plot.hist()`.
(We shouldn't need any kind of new `da.plot.hist2d()` method because as [the xhistogram docs show](https://xhistogram.readthedocs.io/en/latest/tutorial.html) you can already make very nice 2D histogram plots with `da.plot()`.)
### Signature
xhistogram adds bin coordinates (the bin centers) to the output dataarray, named after the quantities that were binned.
Following what we currently have, a minimal top-level function signature looks like
```python
def hist(*dataarrays, bins=None, dim=None, weights=None, density=False):
""""""
Histogram applied along specified dimensions.
If any of the supplied arguments are dask arrays, it will use `dask.array.blockwise`
internally to parallelize over all chunks.
dataarrays : xarray.DataArray objects
Input data. The number of input arguments determines the dimensionality of
the histogram. For example, two arguments produce a 2D histogram.
bins : int or array_like or a list of ints or arrays, or list of DataArrays, optional
If a list, there should be one entry for each item in ``args``.
The bin specification:
* If int, the number of bins for all arguments in ``args``.
* If array_like, the bin edges for all arguments in ``args``.
* If a list of ints, the number of bins for every argument in ``args``.
* If a list of arrays, the bin edges for each argument in ``args``
(required format for Dask inputs).
* A combination [int, array] or [array, int], where int
is the number of bins and array is the bin edges.
* If a list of DataArrays, the bins for each argument in ``args``
The DataArrays can be multidimensional, but must not have any
dimensions shared with the `dim` argument.
When bin edges are specified, all but the last (righthand-most) bin include
the left edge and exclude the right edge. The last bin includes both edges.
A ``TypeError`` will be raised if ``args`` contains dask arrays and
``bins`` are not specified explicitly as a list of arrays.
dim : tuple of strings, optional
Dimensions over which the histogram is computed. The default is to
compute the histogram of the flattened array.
weights : array_like, optional
An array of weights, of the same shape as the input data. Each value
only contributes its associated weight towards the bin count
(instead of 1). If `density` is True, the weights are
normalized, so that the integral of the density over the range
remains 1. NaNs in the weights input will fill the entire bin with
NaNs. If there are NaNs in the weights input, call ``.fillna(0.)``
before running ``histogram()``.
density : bool, optional
If ``False``, the result will contain the number of samples in
each bin. If ``True``, the result is the value of the
probability *density* function at the bin, normalized such that
the *integral* over the range is 1. Note that the sum of the
histogram values will not be equal to 1 unless bins of unity
width are chosen; it is not a probability *mass* function.
""""""
```
Weights could also possibly be set via the `.weighted()` method that [we already have for other operations](http://xarray.pydata.org/en/stable/user-guide/computation.html?highlight=weighted#weighted-array-reductions).
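The weights and density semantics described in the docstring above follow `np.histogram`; a quick numpy sanity check of both claims:

```python
import numpy as np

data = np.array([0.1, 0.4, 0.6, 0.9])
weights = np.array([1.0, 2.0, 3.0, 4.0])
edges = np.array([0.0, 0.5, 1.0])

# weights: each sample contributes its weight instead of 1
counts, _ = np.histogram(data, bins=edges, weights=weights)
assert np.array_equal(counts, [3.0, 7.0])

# density: normalized so that the integral over the range is 1
pdf, _ = np.histogram(data, bins=edges, weights=weights, density=True)
assert np.isclose((pdf * np.diff(edges)).sum(), 1.0)
```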
### Checklist
Desired features in order to fully deprecate xhistogram:
- [ ] axis-aware (ability to loop over dimensions instead of binning over them)
- [ ] optional dask parallelization across all dimensions
- [ ] weights
- [ ] ~~accept dask-aware bin arrays?~~
- [ ] accept multi-dimensional bins arguments? (see https://github.com/xgcm/xhistogram/issues/28)
- any others?
cc @dougiesquire @gjoseph92","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,750985364
https://github.com/pydata/xarray/issues/4610#issuecomment-846418243,https://api.github.com/repos/pydata/xarray/issues/4610,846418243,MDEyOklzc3VlQ29tbWVudDg0NjQxODI0Mw==,14371165,2021-05-22T14:46:13Z,2021-05-22T14:46:13Z,MEMBER,"> but `pyplot.hist` does both the bincounting and the plotting, so it might not be simple to do that.
Should be fine, I think. Matplotlib explains how to reuse `np.histogram`-like results via the weights parameter: https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.hist.html
```python
counts, bins = np.histogram(data)
plt.hist(bins[:-1], bins, weights=counts)
```
Some reading if you want to do the plot by hand:
https://stackoverflow.com/questions/5328556/histogram-matplotlib
https://stackoverflow.com/questions/33203645/how-to-plot-a-histogram-using-matplotlib-in-python-with-a-list-of-data","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",,750985364