

issue_comments


114 rows where user = 14314623 sorted by updated_at descending




issue >30

  • enable internal plotting with cftime datetime 16
  • Dask friendly check in `.weighted()` 8
  • Non lazy behavior for weighted average when using resampled data 6
  • Option to skip tests in `weighted()` 5
  • Error when using .apply_ufunc with .groupby_bins 4
  • We need a fast path for open_mfdataset 4
  • add options for nondivergent and divergent cmap 4
  • Very slow coordinate assignment with dask array 4
  • add average function 3
  • Error when using engine='scipy' reading CM2.6 ocean output 3
  • Multi-dimensional binning/resampling/coarsening 3
  • Added docs example for `xarray.Dataset.get()` 3
  • apply_ufunc with dask='parallelized' and vectorize=True fails on compute_meta 3
  • Fixing non-lazy behavior of sampled+weighted 3
  • Implementing dask.array.coarsen in xarrays 2
  • holoviews / bokeh doesn't like cftime coords 2
  • Change default colormaps 2
  • Achieving square aspect for Facetgrid heatmaps 2
  • [WIP] Feature: Animated 1D plots 2
  • Add support for cftime.datetime coordinates with coarsen 2
  • drop all but specified data_variables/coordinates as a convenience function 2
  • Problems plotting long model control runs with gregorian calendar 2
  • xr.merge bug? when using combine_attrs='drop_conflicts' 2
  • boundary conditions for differentiate() 2
  • Ordered Groupby Keys 1
  • Problem passing 'norm' when plotting a faceted figure 1
  • Bug in dateconversion? 1
  • CF conventions for time doesn't support years 1
  • Projection issue with plot.imshow and cartopy projection 1
  • Diagnose groupby/groupby_bins issues 1
  • …

user 1

  • jbusecke · 114

author_association 1

  • CONTRIBUTOR 114
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1233445643 https://github.com/pydata/xarray/issues/3937#issuecomment-1233445643 https://api.github.com/repos/pydata/xarray/issues/3937 IC_kwDOAMm_X85JhOML jbusecke 14314623 2022-08-31T21:36:51Z 2022-08-31T21:36:51Z CONTRIBUTOR

I am interested in the coarsen with weights scenario that @dcherian and @mathause described here for a current project of ours.

I solved the issue manually and it's not that hard:

```python
import xarray as xr
import numpy as np

# example data with weights
data = np.arange(16).reshape(4, 4).astype(float)

# add some nans
data[2, 2] = np.nan
data[1, 1] = np.nan

# create some simple weights
weights = np.repeat(np.array([[1, 2, 1, 3]]).T, 4, axis=1)
weights

da = xr.DataArray(data, dims=['x', 'y'], coords={'w': (['x', 'y'], weights)})
da
```

```python
masked_weights = da.w.where(~np.isnan(da))  # .weighted() already knows how to do this
da_weighted = da * masked_weights
da_coarse = da_weighted.coarsen(x=2, y=2).sum() / masked_weights.coarsen(x=2, y=2).sum()
da_coarse
```

but I feel all of this is duplicating existing functionality (e.g. the masking of weights based on nans in the data) and might be sensibly streamlined into something like `da.weighted(da.w).coarsen(...).mean()`, at least from a user perspective (there might be unique challenges with the implementation that I am overlooking here).
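For reference, a minimal sketch of what such a streamlined helper could look like, just wrapping the manual steps above (the name `weighted_coarsen` is hypothetical, not xarray API):

```python
import numpy as np
import xarray as xr

def weighted_coarsen(da, weights, **coarsen_kwargs):
    # mask the weights where the data is nan, as .weighted() does internally
    masked_weights = weights.where(~np.isnan(da))
    weighted_sum = (da * masked_weights).coarsen(**coarsen_kwargs).sum()
    return weighted_sum / masked_weights.coarsen(**coarsen_kwargs).sum()

# equivalent to the manual version above:
# da_coarse = weighted_coarsen(da, da.w, x=2, y=2)
```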

Happy to help but would definitely need some guidance on this one.

I do believe that this would provide a very useful functionality for many folks who work with curvilinear grids and want to prototype things that depend on some sort of scale reduction (coarsening).

Also cc'ing @TomNicholas who is involved in the same project 🤗

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  compose weighted with groupby, coarsen, resample, rolling etc. 594669577
1199810912 https://github.com/pydata/xarray/issues/6850#issuecomment-1199810912 https://api.github.com/repos/pydata/xarray/issues/6850 IC_kwDOAMm_X85Hg6lg jbusecke 14314623 2022-07-29T18:02:52Z 2022-07-29T18:02:52Z CONTRIBUTOR

This should be fully reproducible in the pangeo cloud deployment, but is unfortunately only available as 'requester pays' for other local machines.

> How many variables are in ds? You're diagnosing the graph construction time in that %timeit statement. This will scale with the number of variables.

Ahh, good catch. @shanicetbailey, can you try to drop all but two variables from the cloud dataset (ds[['SST', 'U']]) and check again?
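For context, a sketch of the check being suggested (the dataset and variable names come from this thread; the timed operation is only an illustration of graph construction):

```python
import time

ds_small = ds[['SST', 'U']]  # drop all but two variables

t0 = time.perf_counter()
result = ds_small.mean('time')  # lazy: only builds the dask graph
print(f"graph construction: {time.perf_counter() - t0:.2f} s")
```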

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Slow lazy performance on cloud data 1322491028
1109892189 https://github.com/pydata/xarray/issues/6493#issuecomment-1109892189 https://api.github.com/repos/pydata/xarray/issues/6493 IC_kwDOAMm_X85CJ5xd jbusecke 14314623 2022-04-26T14:48:33Z 2022-04-26T14:48:33Z CONTRIBUTOR

Yes, all of the grid methods (grid.diff etc.) are now internally using grid_ufuncs. The axis methods are still going through the old code path, but they will be deprecated soon! Please let us know how you get along with the new functionality; we are very curious about user feedback!
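For anyone landing here, a minimal sketch of calling those grid methods (the dataset and coordinate names are hypothetical; see the xgcm docs for the real setup):

```python
import xgcm

# ds: a dataset with staggered x coordinates, e.g. 'x_c' (centers), 'x_g' (left edges)
grid = xgcm.Grid(ds, coords={'X': {'center': 'x_c', 'left': 'x_g'}}, periodic=['X'])

# grid.diff / grid.interp now route through the grid_ufunc machinery internally
dT = grid.diff(ds['temperature'], 'X')
```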

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  boundary conditions for differentiate() 1206634329
1102925385 https://github.com/pydata/xarray/issues/6493#issuecomment-1102925385 https://api.github.com/repos/pydata/xarray/issues/6493 IC_kwDOAMm_X85BvU5J jbusecke 14314623 2022-04-19T17:49:55Z 2022-04-19T17:49:55Z CONTRIBUTOR

Hi @miniufo et al., just my two cents:

> This is simpler and do not make heavy dependence of the third-party package like xgcm.

That is a fair point, but I think there is a counterpoint to be made: xgcm gives you some more functionality (especially with the new grid_ufuncs feature) with regard to array padding. As you note, this is not needed for your particular setup, but if you use xgcm you would get the same functionality now, plus padding on complex grid topologies for free down the line. So in the end this seems like a tradeoff between adding more dependencies vs. flexibility and generalizability in the future.

> I'll give a try with differentiate() and pad() to implement grad/div/vor... But some designs in xgcm also inspire me to make things much natural.

This makes me think that you really want xgcm, because these properties will naturally be located on staggered grid positions, even if your data is originally on an A grid. And once you start to handle these cases, it would appear to me that you duplicate some of the functionality of xgcm.

> I am still worried about the metrics concept introduced by xgcm. I think this should be discussed over xgcm's repo.

I second others here and think it would be great to elaborate on this on the xgcm issue tracker. But I also want to point out that using the metrics functionality is entirely optional in xgcm, so if you desire, you can roll your own logic on top of grid.diff/interp etc.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  boundary conditions for differentiate() 1206634329
1069668665 https://github.com/pydata/xarray/issues/4470#issuecomment-1069668665 https://api.github.com/repos/pydata/xarray/issues/4470 IC_kwDOAMm_X84_wdk5 jbusecke 14314623 2022-03-16T21:48:21Z 2022-03-16T21:48:21Z CONTRIBUTOR

I am very interested in this sort of functionality as an xarray accessor. If I can help in any way, please let me know. This work would come in very handy for visualizing Oxygen Minimum Zones in the global ocean as isosurfaces of a 3D oxygen array.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray / vtk integration 710357592
908389845 https://github.com/pydata/xarray/issues/5733#issuecomment-908389845 https://api.github.com/repos/pydata/xarray/issues/5733 IC_kwDOAMm_X842JO3V jbusecke 14314623 2021-08-30T14:27:01Z 2021-08-30T14:27:01Z CONTRIBUTOR

Strictly speaking, the values are different, I guess. However, I think this error would be clearer if it said that the dimension order was different but the values are equal once the dimensions are transposed.

I guess this comes down a bit to a philosophical question related to @benbovy's comment above. You can either make this operation similar to the numpy equivalent (with some more xarray-specific checks), or it can check whether the values at a certain combination of labels are the same/close.

The latter is the way I think about data in xarray as a user. The removal of axis logic (via labels) is one of the biggest draws for me, and importantly, I also pitch this as one of the big reasons for beginners to switch to xarray.

I would argue that a 'strict' (numpy-style) comparison is less practical in a scientific workflow, and we do have the numpy implementation to achieve that functionality. So I would ultimately argue that xarray should check closeness between values at certain label positions by default.

However, this might be very opinionated on my end, and a better error message would already be a massive improvement.
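For what it's worth, the label-aware comparison argued for above would amount to something like this sketch (assuming `da_a` and `da_b` share the same dimension names in a different order):

```python
import xarray as xr

# compare values at matching labels, ignoring dimension order
xr.testing.assert_allclose(da_a, da_b.transpose(*da_a.dims))
```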

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Shoudn't `assert_allclose` transpose datasets? 977544678
890156051 https://github.com/pydata/xarray/issues/5649#issuecomment-890156051 https://api.github.com/repos/pydata/xarray/issues/5649 IC_kwDOAMm_X841DrQT jbusecke 14314623 2021-07-30T21:11:40Z 2021-07-30T21:11:40Z CONTRIBUTOR

> That might be a bit too special to be included directly in xarray. Note, however, that instead of a name you can also pass a function to combine_attrs, so you should be able to customize this.

👀
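A sketch of such a callable (the signature follows what xarray documents for callable combine_attrs; the list-unwrapping logic here is hypothetical):

```python
def combine_attrs_loose(attrs_list, context=None):
    """Keep an attribute only if all inputs agree after unwrapping 1-element lists."""
    def normalize(value):
        return value[0] if isinstance(value, list) and len(value) == 1 else value

    combined = dict(attrs_list[0])
    for attrs in attrs_list[1:]:
        for key in list(combined):
            if key not in attrs or normalize(attrs[key]) != normalize(combined[key]):
                del combined[key]
    return combined

# usage: xr.merge([ds1, ds2], combine_attrs=combine_attrs_loose)
```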

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.merge bug? when using combine_attrs='drop_conflicts' 956259734
889512092 https://github.com/pydata/xarray/issues/5649#issuecomment-889512092 https://api.github.com/repos/pydata/xarray/issues/5649 IC_kwDOAMm_X841BOCc jbusecke 14314623 2021-07-29T22:56:04Z 2021-07-29T22:56:04Z CONTRIBUTOR

Ideally, this:

```python
ds1 = xr.Dataset(attrs={'a': [5]})
ds2 = xr.Dataset(attrs={'a': 5})

xr.merge([ds1, ds2], combine_attrs='drop_conflicts')
```

would actually not be dropped but resolved to either 5 or [5]?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.merge bug? when using combine_attrs='drop_conflicts' 956259734
885086453 https://github.com/pydata/xarray/issues/5629#issuecomment-885086453 https://api.github.com/repos/pydata/xarray/issues/5629 IC_kwDOAMm_X840wVj1 jbusecke 14314623 2021-07-22T17:28:52Z 2021-07-22T17:28:52Z CONTRIBUTOR

Should I close this one then?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Polyfit performance on large datasets - Suboptimal dask task graph 950882492
876519164 https://github.com/pydata/xarray/issues/5581#issuecomment-876519164 https://api.github.com/repos/pydata/xarray/issues/5581 MDEyOklzc3VlQ29tbWVudDg3NjUxOTE2NA== jbusecke 14314623 2021-07-08T15:08:21Z 2021-07-08T15:08:21Z CONTRIBUTOR

I just stumbled over this in the cmip6_preprocessing CI. I would really appreciate a bugfix release. Cheers.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Error slicing CFTimeIndex with Pandas 1.3 937508115
789972117 https://github.com/pydata/xarray/issues/2300#issuecomment-789972117 https://api.github.com/repos/pydata/xarray/issues/2300 MDEyOklzc3VlQ29tbWVudDc4OTk3MjExNw== jbusecke 14314623 2021-03-03T18:50:18Z 2021-03-03T18:50:18Z CONTRIBUTOR

> the question is whether the chunk() method should delete existing chunks attributes from encoding.

IMO this is the user-friendly thing to do.

Just ran into this issue myself and just wanted to add a +1 to stripping the encoding when .chunk() is used.
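Until something like that lands, the manual workaround looks roughly like this sketch (the zarr path is a placeholder):

```python
ds = ds.chunk({'time': 100})

# drop the stale chunk encoding carried over from the source file,
# so to_zarr uses the new dask chunks instead
for var in ds.variables.values():
    var.encoding.pop('chunks', None)

ds.to_zarr('out.zarr')
```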

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  zarr and xarray chunking compatibility and `to_zarr` performance 342531772
775464667 https://github.com/pydata/xarray/issues/4084#issuecomment-775464667 https://api.github.com/repos/pydata/xarray/issues/4084 MDEyOklzc3VlQ29tbWVudDc3NTQ2NDY2Nw== jbusecke 14314623 2021-02-08T21:11:39Z 2021-02-08T21:11:39Z CONTRIBUTOR

I encountered a similar problem, which I could solve by dropping all scalar non-dim coords in my dataset (works in this particular workflow of mine, but is generally not ideal). Is the solution proposed by @chrisroat general enough to be implemented? Or is there another way to avoid this situation (besides dropping the coordinates)?
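For reference, the workaround I mean is roughly this sketch (the output path is a placeholder):

```python
# drop all 0-d, non-dimension coordinates before writing
scalar_coords = [name for name, coord in ds.coords.items() if coord.ndim == 0]
ds_clean = ds.drop_vars(scalar_coords)
ds_clean.to_zarr('out.zarr')
```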

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  write/read to zarr subtly changes array with non-dim coord 621451930
746816761 https://github.com/pydata/xarray/pull/4668#issuecomment-746816761 https://api.github.com/repos/pydata/xarray/issues/4668 MDEyOklzc3VlQ29tbWVudDc0NjgxNjc2MQ== jbusecke 14314623 2020-12-16T18:49:54Z 2020-12-16T18:49:54Z CONTRIBUTOR

Thanks for all the help! I made the entry to whats-new.rst.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fixing non-lazy behavior of sampled+weighted 760375642
744572308 https://github.com/pydata/xarray/pull/4668#issuecomment-744572308 https://api.github.com/repos/pydata/xarray/issues/4668 MDEyOklzc3VlQ29tbWVudDc0NDU3MjMwOA== jbusecke 14314623 2020-12-14T16:56:19Z 2020-12-14T16:56:19Z CONTRIBUTOR

> do you wan't to add a what's new entry? In any case we can merge this in a few days unless someone else has a comment.

Not sure. Would you think that this is significant enough? If yes, I'd be happy to do it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fixing non-lazy behavior of sampled+weighted 760375642
744504269 https://github.com/pydata/xarray/pull/4668#issuecomment-744504269 https://api.github.com/repos/pydata/xarray/issues/4668 MDEyOklzc3VlQ29tbWVudDc0NDUwNDI2OQ== jbusecke 14314623 2020-12-14T15:09:55Z 2020-12-14T15:09:55Z CONTRIBUTOR

Oops. Thanks for catching that. Should be fixed now.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fixing non-lazy behavior of sampled+weighted 760375642
741811314 https://github.com/pydata/xarray/issues/4625#issuecomment-741811314 https://api.github.com/repos/pydata/xarray/issues/4625 MDEyOklzc3VlQ29tbWVudDc0MTgxMTMxNA== jbusecke 14314623 2020-12-09T14:34:45Z 2020-12-09T14:34:45Z CONTRIBUTOR

As @dcherian pointed out above copy(..., deep=False) does fix this for all cases I am testing.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Non lazy behavior for weighted average when using resampled data 753517739
741808410 https://github.com/pydata/xarray/issues/4625#issuecomment-741808410 https://api.github.com/repos/pydata/xarray/issues/4625 MDEyOklzc3VlQ29tbWVudDc0MTgwODQxMA== jbusecke 14314623 2020-12-09T14:30:57Z 2020-12-09T14:30:57Z CONTRIBUTOR

So I have added a test in #4668, and it confirms that this behavior only occurs if the resample interval is smaller than or equal to the chunks. If the resample interval is larger than the chunks, it stays completely lazy. Not sure if this is a general limitation? Does anyone have more insight into how resample handles this kind of workflow?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Non lazy behavior for weighted average when using resampled data 753517739
736880359 https://github.com/pydata/xarray/issues/4625#issuecomment-736880359 https://api.github.com/repos/pydata/xarray/issues/4625 MDEyOklzc3VlQ29tbWVudDczNjg4MDM1OQ== jbusecke 14314623 2020-12-01T23:15:19Z 2020-12-01T23:15:19Z CONTRIBUTOR

Oh I remember that too, and I didn't understand it at all...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Non lazy behavior for weighted average when using resampled data 753517739
736780341 https://github.com/pydata/xarray/issues/4635#issuecomment-736780341 https://api.github.com/repos/pydata/xarray/issues/4635 MDEyOklzc3VlQ29tbWVudDczNjc4MDM0MQ== jbusecke 14314623 2020-12-01T19:51:57Z 2020-12-01T19:51:57Z CONTRIBUTOR

That was it. Sorry for the false alarm! Closing this.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Unexpected error when using `weighted` 754558237
736563711 https://github.com/pydata/xarray/issues/4625#issuecomment-736563711 https://api.github.com/repos/pydata/xarray/issues/4625 MDEyOklzc3VlQ29tbWVudDczNjU2MzcxMQ== jbusecke 14314623 2020-12-01T13:50:21Z 2020-12-01T13:50:21Z CONTRIBUTOR

Do you have a suggestion for how to test this? Should I write a test involving resample + weighted?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Non lazy behavior for weighted average when using resampled data 753517739
736147406 https://github.com/pydata/xarray/issues/4625#issuecomment-736147406 https://api.github.com/repos/pydata/xarray/issues/4625 MDEyOklzc3VlQ29tbWVudDczNjE0NzQwNg== jbusecke 14314623 2020-12-01T00:58:21Z 2020-12-01T00:58:21Z CONTRIBUTOR

Sweet. I'll try to apply this fix to my workflow now. Happy to submit a PR with the suggested changes to weighted.py too.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Non lazy behavior for weighted average when using resampled data 753517739
736082255 https://github.com/pydata/xarray/issues/4625#issuecomment-736082255 https://api.github.com/repos/pydata/xarray/issues/4625 MDEyOklzc3VlQ29tbWVudDczNjA4MjI1NQ== jbusecke 14314623 2020-11-30T22:00:38Z 2020-11-30T22:00:38Z CONTRIBUTOR

Oh nooo. So would you suggest that in addition to #4559, we should have a kwarg to completely skip this?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Non lazy behavior for weighted average when using resampled data 753517739
723197430 https://github.com/pydata/xarray/pull/4559#issuecomment-723197430 https://api.github.com/repos/pydata/xarray/issues/4559 MDEyOklzc3VlQ29tbWVudDcyMzE5NzQzMA== jbusecke 14314623 2020-11-06T17:15:33Z 2020-11-06T17:15:33Z CONTRIBUTOR

Seems like all the other tests are passing (minus the two upstream problems discussed before).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dask friendly check in `.weighted()` 733789095
723168106 https://github.com/pydata/xarray/pull/4559#issuecomment-723168106 https://api.github.com/repos/pydata/xarray/issues/4559 MDEyOklzc3VlQ29tbWVudDcyMzE2ODEwNg== jbusecke 14314623 2020-11-06T16:20:04Z 2020-11-06T16:20:04Z CONTRIBUTOR

I am not understanding why that MinimumVersionPolicy test is failing (or what it does, haha). Is this something upstream, or should I fix it?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dask friendly check in `.weighted()` 733789095
722513714 https://github.com/pydata/xarray/pull/4559#issuecomment-722513714 https://api.github.com/repos/pydata/xarray/issues/4559 MDEyOklzc3VlQ29tbWVudDcyMjUxMzcxNA== jbusecke 14314623 2020-11-05T17:12:06Z 2020-11-05T17:12:06Z CONTRIBUTOR

Ok I think this should be good to go. I have implemented all the requested changes. The remaining failures are related to other problems upstream (I think). Anything else I should add here?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dask friendly check in `.weighted()` 733789095
721831543 https://github.com/pydata/xarray/pull/4559#issuecomment-721831543 https://api.github.com/repos/pydata/xarray/issues/4559 MDEyOklzc3VlQ29tbWVudDcyMTgzMTU0Mw== jbusecke 14314623 2020-11-04T16:22:43Z 2020-11-04T16:22:43Z CONTRIBUTOR

Similarly on MacOSX py38 this fails: TestDask.test_save_mfdataset_compute_false_roundtrip

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dask friendly check in `.weighted()` 733789095
721830832 https://github.com/pydata/xarray/pull/4559#issuecomment-721830832 https://api.github.com/repos/pydata/xarray/issues/4559 MDEyOklzc3VlQ29tbWVudDcyMTgzMDgzMg== jbusecke 14314623 2020-11-04T16:21:35Z 2020-11-04T16:21:35Z CONTRIBUTOR

I am getting some failures for py38-upstream (TestDataset.test_polyfit_warnings) that seem unrelated?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dask friendly check in `.weighted()` 733789095
721362748 https://github.com/pydata/xarray/pull/4559#issuecomment-721362748 https://api.github.com/repos/pydata/xarray/issues/4559 MDEyOklzc3VlQ29tbWVudDcyMTM2Mjc0OA== jbusecke 14314623 2020-11-03T20:39:31Z 2020-11-03T20:39:31Z CONTRIBUTOR

Do you think this works or are further changes needed? Many thanks for the guidance so far!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dask friendly check in `.weighted()` 733789095
720166781 https://github.com/pydata/xarray/pull/4559#issuecomment-720166781 https://api.github.com/repos/pydata/xarray/issues/4559 MDEyOklzc3VlQ29tbWVudDcyMDE2Njc4MQ== jbusecke 14314623 2020-11-01T23:11:51Z 2020-11-01T23:11:51Z CONTRIBUTOR

I did have to fiddle with this a bit. I changed .isnull() to np.isnan() and replaced the data of weights; otherwise the error would not be raised for the dask array at all. This does not look terribly elegant to me, but it passes tests locally. Waiting for the CI to see if the others pass as well. Happy to make further changes, and thanks for all the input already.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dask friendly check in `.weighted()` 733789095
720020006 https://github.com/pydata/xarray/pull/4559#issuecomment-720020006 https://api.github.com/repos/pydata/xarray/issues/4559 MDEyOklzc3VlQ29tbWVudDcyMDAyMDAwNg== jbusecke 14314623 2020-11-01T03:16:06Z 2020-11-01T03:16:06Z CONTRIBUTOR

The CI environments without dask are failing. Should I add some pytest skip logic, or what is the best way to handle this?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dask friendly check in `.weighted()` 733789095
717625573 https://github.com/pydata/xarray/issues/4541#issuecomment-717625573 https://api.github.com/repos/pydata/xarray/issues/4541 MDEyOklzc3VlQ29tbWVudDcxNzYyNTU3Mw== jbusecke 14314623 2020-10-28T00:45:31Z 2020-10-28T00:45:31Z CONTRIBUTOR

> Another option would be to put the check in a .map_blocks call for dask arrays. This would only run and raise at compute time.

Uh, that sounds great actually. Same functionality, no triggered computation, and no intervention needed from the user. Should I try to implement this?
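As I understand it, the suggestion amounts to something like this sketch (checking inside the blocks of the dask-backed weights, so the error is only raised at compute time):

```python
import numpy as np

def _raise_if_nan(block):
    # runs per block at compute time; nothing is computed eagerly here
    if np.isnan(block).any():
        raise ValueError("`weights` cannot contain missing values.")
    return block

# weights.data is the underlying dask array of the weights DataArray
checked = weights.copy(data=weights.data.map_blocks(_raise_if_nan))
```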

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Option to skip tests in `weighted()` 729980097
717266102 https://github.com/pydata/xarray/issues/4541#issuecomment-717266102 https://api.github.com/repos/pydata/xarray/issues/4541 MDEyOklzc3VlQ29tbWVudDcxNzI2NjEwMg== jbusecke 14314623 2020-10-27T14:03:34Z 2020-10-27T14:03:34Z CONTRIBUTOR

Thanks @mathause, I was wondering how much of a performance trade-off .fillna(0) is on a dask array with no nans, compared to the check.

I favor this, since it allows slicing before the calculation is triggered: I have a current situation where I do a bunch of operations on a large multi-model dataset. The weights are time- and member-dependent, and I am trying to save each member separately. Having the calculation triggered for the full dataset is problematic, and fillna(0) avoids that (I am working with a hacked branch where I simply removed the check for nans).
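For concreteness, the fillna-based approach amounts to something like this sketch (`ds`, `weights`, and the dims are placeholders from my workflow):

```python
# replace nan weights with 0 instead of checking for them eagerly;
# zero-weight points then simply do not contribute to the average
weights = weights.fillna(0)
result = ds.weighted(weights).mean(dim='time')
```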

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Option to skip tests in `weighted()` 729980097
716974071 https://github.com/pydata/xarray/issues/4541#issuecomment-716974071 https://api.github.com/repos/pydata/xarray/issues/4541 MDEyOklzc3VlQ29tbWVudDcxNjk3NDA3MQ== jbusecke 14314623 2020-10-27T04:33:04Z 2020-10-27T04:33:04Z CONTRIBUTOR

Sounds good. I'll see if I can make some time to test and put up a PR this week.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Option to skip tests in `weighted()` 729980097
716930400 https://github.com/pydata/xarray/issues/4541#issuecomment-716930400 https://api.github.com/repos/pydata/xarray/issues/4541 MDEyOklzc3VlQ29tbWVudDcxNjkzMDQwMA== jbusecke 14314623 2020-10-27T02:06:35Z 2020-10-27T02:06:35Z CONTRIBUTOR

What would happen in this case if a dask array with nans is passed? Would this somehow silently influence the results, or would it not matter (in which case I wonder what the check was for)? If this could lead to undetected errors, I would still consider a kwarg a safer alternative, especially for new users.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Option to skip tests in `weighted()` 729980097
716927242 https://github.com/pydata/xarray/issues/4541#issuecomment-716927242 https://api.github.com/repos/pydata/xarray/issues/4541 MDEyOklzc3VlQ29tbWVudDcxNjkyNzI0Mg== jbusecke 14314623 2020-10-27T01:56:28Z 2020-10-27T01:56:28Z CONTRIBUTOR

Sorry if my initial issue was unclear. So you favor not having a 'skip' kwarg and instead just internally skipping the call to .any() if weights is a dask array?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Option to skip tests in `weighted()` 729980097
704530619 https://github.com/pydata/xarray/issues/4112#issuecomment-704530619 https://api.github.com/repos/pydata/xarray/issues/4112 MDEyOklzc3VlQ29tbWVudDcwNDUzMDYxOQ== jbusecke 14314623 2020-10-06T20:20:34Z 2020-10-06T20:20:34Z CONTRIBUTOR

Just tried this with the newest dask version and can confirm that I do not get huge chunks anymore if I specify dask.config.set({"array.slicing.split_large_chunks": True}). I also needed to modify the example to exceed the internal chunk size limitation:

```python
import numpy as np
import xarray as xr
import dask

dask.config.set({"array.slicing.split_large_chunks": True})

short_time = xr.cftime_range('2000', periods=12)
long_time = xr.cftime_range('2000', periods=120)

data_short = np.random.rand(len(short_time))
data_long = np.random.rand(len(long_time))
n = 1000
a = xr.DataArray(data_short, dims=['time'], coords={'time': short_time}).expand_dims(a=n, b=n).chunk({'time': 3})
b = xr.DataArray(data_long, dims=['time'], coords={'time': long_time}).expand_dims(a=n, b=n).chunk({'time': 3})

a, b = xr.align(a, b, join='outer')
```

With the option turned on I get split chunks for `a` (chunk-layout screenshot omitted); with the defaults, I still get one giant chunk.

I'll try this soon in the real-world scenario described above. Just wanted to report back here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Unexpected chunking behavior when using `xr.align` with `join='outer'` 627600168
697677955 https://github.com/pydata/xarray/issues/1845#issuecomment-697677955 https://api.github.com/repos/pydata/xarray/issues/1845 MDEyOklzc3VlQ29tbWVudDY5NzY3Nzk1NQ== jbusecke 14314623 2020-09-23T16:47:37Z 2020-09-23T16:47:37Z CONTRIBUTOR

Wondering if this is still an issue. I don't have the data to check it, but in my experience these kinds of operations have gotten much better in recent versions. I'll close this for now.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  speed up opening multiple files with changing data variables 290084668
665810940 https://github.com/pydata/xarray/issues/3841#issuecomment-665810940 https://api.github.com/repos/pydata/xarray/issues/3841 MDEyOklzc3VlQ29tbWVudDY2NTgxMDk0MA== jbusecke 14314623 2020-07-29T17:55:22Z 2020-07-29T17:55:22Z CONTRIBUTOR

Closing in favor of SciTools/nc-time-axis#44

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problems plotting long model control runs with gregorian calendar 577030502
595930317 https://github.com/pydata/xarray/issues/3841#issuecomment-595930317 https://api.github.com/repos/pydata/xarray/issues/3841 MDEyOklzc3VlQ29tbWVudDU5NTkzMDMxNw== jbusecke 14314623 2020-03-06T19:44:10Z 2020-03-06T19:44:10Z CONTRIBUTOR

Apologies for not noticing this earlier. I (wrongly) assumed that xarray would handle the axis limits. Should I raise an issue over there? I am currently quite busy, so it might take a while for me to be able to work on a PR.

> It looks like you can set xlim in da.plot.line - could you test this?

I tried this:

```python
# This needs a good amount of dask workers!
ds = data_dict['CMIP.CSIRO.ACCESS-ESM1-5.piControl.Omon.gn']
ts.plot(xlim=['0005', '2000'])
```

and am getting the same error.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problems plotting long model control runs with gregorian calendar 577030502
567583795 https://github.com/pydata/xarray/issues/2867#issuecomment-567583795 https://api.github.com/repos/pydata/xarray/issues/2867 MDEyOklzc3VlQ29tbWVudDU2NzU4Mzc5NQ== jbusecke 14314623 2019-12-19T17:24:27Z 2019-12-19T17:24:27Z CONTRIBUTOR

I can confirm that this issue is resolved for my project. It no longer seems to make a difference in speed whether I assign the dataarray as a coordinate or as a data variable. Thanks for the fix!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Very slow coordinate assignment with dask array 429511994
566689340 https://github.com/pydata/xarray/issues/2867#issuecomment-566689340 https://api.github.com/repos/pydata/xarray/issues/2867 MDEyOklzc3VlQ29tbWVudDU2NjY4OTM0MA== jbusecke 14314623 2019-12-17T18:28:32Z 2019-12-17T18:28:32Z CONTRIBUTOR

I think this issue was actually a dupe. I remember you pointing me to changes in 14.x that improved the performance, but I can't find the other issue right now. I will have an opportunity to test this in the coming days on some huge GFDL data.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Very slow coordinate assignment with dask array 429511994
566637471 https://github.com/pydata/xarray/issues/3574#issuecomment-566637471 https://api.github.com/repos/pydata/xarray/issues/3574 MDEyOklzc3VlQ29tbWVudDU2NjYzNzQ3MQ== jbusecke 14314623 2019-12-17T16:22:35Z 2019-12-17T16:22:35Z CONTRIBUTOR

I can give it a shot if you could point me to the appropriate place, since I have never messed with the dask internals of xarray.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  apply_ufunc with dask='parallelized' and vectorize=True fails on compute_meta 528701910
566636980 https://github.com/pydata/xarray/issues/2867#issuecomment-566636980 https://api.github.com/repos/pydata/xarray/issues/2867 MDEyOklzc3VlQ29tbWVudDU2NjYzNjk4MA== jbusecke 14314623 2019-12-17T16:21:23Z 2019-12-17T16:21:23Z CONTRIBUTOR

I believe this was fixed in a recent version. Closing.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Very slow coordinate assignment with dask array 429511994
565057853 https://github.com/pydata/xarray/issues/3574#issuecomment-565057853 https://api.github.com/repos/pydata/xarray/issues/3574 MDEyOklzc3VlQ29tbWVudDU2NTA1Nzg1Mw== jbusecke 14314623 2019-12-12T15:35:10Z 2019-12-12T15:35:10Z CONTRIBUTOR

This is the chunk setup (screenshot of the chunk layout omitted).

Might this be a problem resulting from numpy.vectorize?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  apply_ufunc with dask='parallelized' and vectorize=True fails on compute_meta 528701910
564843368 https://github.com/pydata/xarray/issues/3574#issuecomment-564843368 https://api.github.com/repos/pydata/xarray/issues/3574 MDEyOklzc3VlQ29tbWVudDU2NDg0MzM2OA== jbusecke 14314623 2019-12-12T04:22:02Z 2019-12-12T05:32:14Z CONTRIBUTOR

I am having a similar problem. This impacts some of my frequently used code to compute correlations.

Here is a simplified example that used to work with older dependencies:

```python
import xarray as xr
import numpy as np
from scipy.stats import linregress

def _ufunc(aa, bb):
    out = linregress(aa, bb)
    return np.array([out.slope, out.intercept])

def wrapper(a, b, dim='time'):
    return xr.apply_ufunc(
        _ufunc, a, b,
        input_core_dims=[[dim], [dim]],
        output_core_dims=[["parameter"]],
        vectorize=True,
        dask="parallelized",
        output_dtypes=[a.dtype],
        output_sizes={"parameter": 2},
    )
```

This works when passing numpy arrays:

```python
a = xr.DataArray(np.random.rand(3, 13, 5), dims=['x', 'time', 'y'])
b = xr.DataArray(np.random.rand(3, 5, 13), dims=['x', 'y', 'time'])
wrapper(a, b)
```

```
<xarray.DataArray (x: 3, y: 5, parameter: 2)>
array([[[ 0.09958247,  0.36831431],
        [-0.54445474,  0.66997513],
        [-0.22894182,  0.65433402],
        [ 0.38536482,  0.20656073],
        [ 0.25083224,  0.46955618]],

       [[-0.21684891,  0.55521932],
        [ 0.51621616,  0.20869272],
        [-0.1502755 ,  0.55526262],
        [-0.25452988,  0.60823538],
        [-0.20571622,  0.56950115]],

       [[-0.22810421,  0.50423622],
        [ 0.33002345,  0.36121484],
        [ 0.37744774,  0.33081058],
        [-0.10825559,  0.53772493],
        [-0.12576656,  0.51722167]]])
Dimensions without coordinates: x, y, parameter
```

But when I convert both arrays to dask arrays, I get the same error as @smartass101:

```python
wrapper(a.chunk({'x': 2, 'time': -1}), b.chunk({'x': 2, 'time': -1}))
```

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-4-303b400356e2> in <module>
      1 a = xr.DataArray(np.random.rand(3, 13, 5), dims=['x', 'time', 'y'])
      2 b = xr.DataArray(np.random.rand(3, 5, 13), dims=['x','y', 'time'])
----> 3 wrapper(a.chunk({'x':2, 'time':-1}),b.chunk({'x':2, 'time':-1}))

[... intermediate frames through xarray/core/computation.py (apply_ufunc,
 apply_dataarray_vfunc, apply_variable_ufunc, _apply_blockwise),
 dask/array/blockwise.py, dask/array/utils.py (compute_meta),
 and numpy/lib/function_base.py (np.vectorize) ...]

~/miniconda/envs/euc_dynamics/lib/python3.7/site-packages/numpy/lib/function_base.py in _vectorize_call_with_signature(self, func, args)
   2229                 for dims in output_core_dims
   2230                 for dim in dims):
-> 2231             raise ValueError('cannot call `vectorize` with a signature '
   2232                              'including new output dimensions on size 0 '
   2233                              'inputs')

ValueError: cannot call `vectorize` with a signature including new output dimensions on size 0 inputs
```

This used to work like a charm... I however was sloppy in testing this functionality (a good reminder to always write tests immediately 🙄), and I have not been able to determine a combination of dependencies that works. I am still experimenting and will report back.

Could this behaviour be a bug introduced in dask at some point (as indicated by @smartass101 above)? cc'ing @dcherian @shoyer @mrocklin

EDIT: I can confirm that it seems to be a dask issue. If I restrict my dask version to <2.0, my tests (very similar to the above example) work.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  apply_ufunc with dask='parallelized' and vectorize=True fails on compute_meta 528701910
556062845 https://github.com/pydata/xarray/issues/757#issuecomment-556062845 https://api.github.com/repos/pydata/xarray/issues/757 MDEyOklzc3VlQ29tbWVudDU1NjA2Mjg0NQ== jbusecke 14314623 2019-11-20T15:45:33Z 2019-11-20T15:45:33Z CONTRIBUTOR

Just stumbled across this issue. Is there a recommended workaround?

I am usually doing this (specific to seasons):

```python
import xarray as xr

ds = xr.tutorial.open_dataset('air_temperature')
airtemp_seasonal = (
    ds.groupby('time.season')
    .mean('time')
    .sortby(xr.DataArray(['DJF', 'MAM', 'JJA', 'SON'], dims=['season']))
)
```

Thought this might help some folks who need to solve this problem.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Ordered Groupby Keys 132774456
547467154 https://github.com/pydata/xarray/issues/3454#issuecomment-547467154 https://api.github.com/repos/pydata/xarray/issues/3454 MDEyOklzc3VlQ29tbWVudDU0NzQ2NzE1NA== jbusecke 14314623 2019-10-29T15:05:17Z 2019-10-29T15:05:17Z CONTRIBUTOR

You guys are the best! Thanks.


On Oct 29, 2019, at 10:47 AM, Deepak Cherian (notifications@github.com) wrote:

> Totally fixed by #3453 (https://github.com/pydata/xarray/pull/3453)!!! Both statements take the same time on that branch.


{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Large coordinate arrays trigger computation 513916063
545002080 https://github.com/pydata/xarray/issues/1764#issuecomment-545002080 https://api.github.com/repos/pydata/xarray/issues/1764 MDEyOklzc3VlQ29tbWVudDU0NTAwMjA4MA== jbusecke 14314623 2019-10-22T14:53:30Z 2019-10-22T14:53:30Z CONTRIBUTOR

I think this was closed via #3338. Closing.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  .groupby_bins fails when data is not contained in bins 279883145
541025099 https://github.com/pydata/xarray/issues/3377#issuecomment-541025099 https://api.github.com/repos/pydata/xarray/issues/3377 MDEyOklzc3VlQ29tbWVudDU0MTAyNTA5OQ== jbusecke 14314623 2019-10-11T11:25:26Z 2019-10-11T11:25:26Z CONTRIBUTOR

Glad that this orphaned test (we ended up removing it because the function was not called anymore) was still useful!

And many thanks to @dcherian for suggesting to test xgcm against the upstream master!


{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Changed behavior for replacing coordinates on dataset. 503562032
537174078 https://github.com/pydata/xarray/pull/3362#issuecomment-537174078 https://api.github.com/repos/pydata/xarray/issues/3362 MDEyOklzc3VlQ29tbWVudDUzNzE3NDA3OA== jbusecke 14314623 2019-10-01T18:43:03Z 2019-10-01T18:43:03Z CONTRIBUTOR

Thanks for this quick implementation, @dcherian. I will work on testing against the xarray master in our downstream CI so we can catch these earlier.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix concat bug when concatenating unlabeled dimensions. 501059947
531945252 https://github.com/pydata/xarray/issues/1823#issuecomment-531945252 https://api.github.com/repos/pydata/xarray/issues/1823 MDEyOklzc3VlQ29tbWVudDUzMTk0NTI1Mg== jbusecke 14314623 2019-09-16T20:29:35Z 2019-09-16T20:29:35Z CONTRIBUTOR

Wooooow. Thanks. I'll have to give this a whirl soon.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  We need a fast path for open_mfdataset 288184220
495011402 https://github.com/pydata/xarray/issues/2982#issuecomment-495011402 https://api.github.com/repos/pydata/xarray/issues/2982 MDEyOklzc3VlQ29tbWVudDQ5NTAxMTQwMg== jbusecke 14314623 2019-05-22T23:28:41Z 2019-05-22T23:28:41Z CONTRIBUTOR

If I understand correctly, it gets piped through cmap_kwargs, which seems odd to me. Do you agree that this is a bug? Or am I missing a case where it would be preferable to pass extend directly to plot.contourf?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  cbar_kwargs are ignored for `plot.contourf` 447361330
488782977 https://github.com/pydata/xarray/pull/2729#issuecomment-488782977 https://api.github.com/repos/pydata/xarray/issues/2729 MDEyOklzc3VlQ29tbWVudDQ4ODc4Mjk3Nw== jbusecke 14314623 2019-05-02T18:34:55Z 2019-05-02T18:34:55Z CONTRIBUTOR

Also FYI I have a PR open that will enable xmovie to write movie files (by invoking ffmpeg 'under the hood'). Just wanted to mention it since this might come in handy as another export option for this feature later on.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [WIP] Feature: Animated 1D plots 404945709
488780624 https://github.com/pydata/xarray/pull/2729#issuecomment-488780624 https://api.github.com/repos/pydata/xarray/issues/2729 MDEyOklzc3VlQ29tbWVudDQ4ODc4MDYyNA== jbusecke 14314623 2019-05-02T18:27:49Z 2019-05-02T18:27:49Z CONTRIBUTOR

This looks amazing! Which problem are you referring to specifically @rabernat?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [WIP] Feature: Animated 1D plots 404945709
483738697 https://github.com/pydata/xarray/issues/2900#issuecomment-483738697 https://api.github.com/repos/pydata/xarray/issues/2900 MDEyOklzc3VlQ29tbWVudDQ4MzczODY5Nw== jbusecke 14314623 2019-04-16T16:39:36Z 2019-04-16T16:39:36Z CONTRIBUTOR

Cool, I'll give that a try some time soon.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset with proprocess ds[var] 433833707
483460066 https://github.com/pydata/xarray/pull/2894#issuecomment-483460066 https://api.github.com/repos/pydata/xarray/issues/2894 MDEyOklzc3VlQ29tbWVudDQ4MzQ2MDA2Ng== jbusecke 14314623 2019-04-15T23:53:32Z 2019-04-15T23:53:32Z CONTRIBUTOR

How about this one?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Added docs example for `xarray.Dataset.get()` 433410125
483458809 https://github.com/pydata/xarray/pull/2894#issuecomment-483458809 https://api.github.com/repos/pydata/xarray/issues/2894 MDEyOklzc3VlQ29tbWVudDQ4MzQ1ODgwOQ== jbusecke 14314623 2019-04-15T23:46:23Z 2019-04-15T23:46:49Z CONTRIBUTOR

I am still a bit puzzled over the ds[['x', 'temperature']] suggestion. Maybe I am not getting something, but x is a dimension and the output of ds[['x', 'temperature']] is the same as ds[['temperature']]. I think it would be clearer to add a third variable (maybe in the code block above) and then select two out of three data variables?
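Something like this is what I have in mind (a sketch with made-up values; 'salinity' is the added third variable):

```python
import xarray as xr

ds = xr.Dataset(
    {
        'temperature': ('x', [20.0, 21.5]),
        'salinity': ('x', [35.0, 34.5]),
        'pressure': ('x', [0.0, 10.0]),
    }
)
ds[['temperature', 'salinity']]  # selects two of the three data variables
```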

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Added docs example for `xarray.Dataset.get()` 433410125
483457364 https://github.com/pydata/xarray/pull/2894#issuecomment-483457364 https://api.github.com/repos/pydata/xarray/issues/2894 MDEyOklzc3VlQ29tbWVudDQ4MzQ1NzM2NA== jbusecke 14314623 2019-04-15T23:38:41Z 2019-04-15T23:38:41Z CONTRIBUTOR

Oh I see. It returns None if any of the keys is not found. That might indeed lead to confusion. So should I just add an example with multiple variables using ds[['var1', 'var2']]?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Added docs example for `xarray.Dataset.get()` 433410125
483357765 https://github.com/pydata/xarray/issues/2884#issuecomment-483357765 https://api.github.com/repos/pydata/xarray/issues/2884 MDEyOklzc3VlQ29tbWVudDQ4MzM1Nzc2NQ== jbusecke 14314623 2019-04-15T18:04:34Z 2019-04-15T18:04:34Z CONTRIBUTOR

Ok I have submitted a PR for the xarray.Dataset.get function.

@dcherian, I was not able to find that issue you mentioned. I would certainly be interested to have a look in the future.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  drop all but specified data_variables/coordinates as a convenience function 431584027
483341164 https://github.com/pydata/xarray/issues/422#issuecomment-483341164 https://api.github.com/repos/pydata/xarray/issues/422 MDEyOklzc3VlQ29tbWVudDQ4MzM0MTE2NA== jbusecke 14314623 2019-04-15T17:18:17Z 2019-04-15T17:18:17Z CONTRIBUTOR

Point taken. I am still not thinking general enough :-)

> Are we going to require that the argument to weighted is a DataArray that shares at least one dimension with da?

This sounds good to me.

With regard to the implementation, I thought of orienting myself along the lines of groupby, rolling or resample. Or are there any concerns for this specific method?
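To illustrate what I mean by orienting along those lines, a very rough sketch of the accessor pattern (not a proposal for the eventual implementation):

```python
class DataArrayWeighted:
    def __init__(self, obj, weights):
        self.obj = obj
        self.weights = weights

    def mean(self, dim=None):
        # mask weights where the data is nan so those points don't count
        masked_weights = self.weights.where(self.obj.notnull())
        return (self.obj * masked_weights).sum(dim) / masked_weights.sum(dim)

# hypothetical wiring, analogous to .rolling()/.groupby():
# da.weighted(weights) would return DataArrayWeighted(da, weights)
```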

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add average function 84127296
482719668 https://github.com/pydata/xarray/issues/422#issuecomment-482719668 https://api.github.com/repos/pydata/xarray/issues/422 MDEyOklzc3VlQ29tbWVudDQ4MjcxOTY2OA== jbusecke 14314623 2019-04-12T20:54:23Z 2019-04-12T20:54:23Z CONTRIBUTOR

I have to say that I am still pretty bad at thinking fully object-oriented, but is this what we want in general? A subclass of xr.DataArray which gets initialized with a weight array and, with some logic for nans, then 'knows' about the weight count? Where would I find a good analogue for this sort of organization? In the rolling class?

I like the syntax proposed by @jhamman above, but I am wondering what happens in a slightly modified example:

```python
da.shape  # (72, 10, 15)
da.dims   # ('time', 'x', 'y')
weights = some_func_of_x(x)
da.weighted(weights).mean(dim=('x', 'y'))
```

I think we should maybe build in a warning for when the `weights` array does not contain both of the averaging dimensions? A sketch of that check follows below.
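Roughly, the warning I have in mind (names are placeholders, not a proposed API):

```python
import warnings

def _warn_on_missing_dims(weights, dim):
    # warn if the weights lack some of the dimensions being averaged over
    missing = set(dim) - set(weights.dims)
    if missing:
        warnings.warn(
            f"weights do not contain the reduction dimensions {sorted(missing)}; "
            "they will be broadcast against the data"
        )
```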

It was mentioned that the functions on ...weighted() would have to be mostly rewritten, since the logic for a weighted average and std differs. What other functions should be included (if any)?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add average function 84127296
481945488 https://github.com/pydata/xarray/issues/422#issuecomment-481945488 https://api.github.com/repos/pydata/xarray/issues/422 MDEyOklzc3VlQ29tbWVudDQ4MTk0NTQ4OA== jbusecke 14314623 2019-04-11T02:55:06Z 2019-04-11T02:55:06Z CONTRIBUTOR

Found this issue via @rabernat's blog post. This is a much-requested feature in our working group, and it would be great to build onto it in xgcm as well. I would be very keen to help this advance.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add average function 84127296
481771119 https://github.com/pydata/xarray/issues/2884#issuecomment-481771119 https://api.github.com/repos/pydata/xarray/issues/2884 MDEyOklzc3VlQ29tbWVudDQ4MTc3MTExOQ== jbusecke 14314623 2019-04-10T16:48:39Z 2019-04-10T16:48:39Z CONTRIBUTOR

Wow, that's awesome. Had no clue about it. I will put in a PR for the docs for sure. Might take a bit though. I'll also take a look at the keep_vars option.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  drop all but specified data_variables/coordinates as a convenience function 431584027
481417930 https://github.com/pydata/xarray/issues/2867#issuecomment-481417930 https://api.github.com/repos/pydata/xarray/issues/2867 MDEyOklzc3VlQ29tbWVudDQ4MTQxNzkzMA== jbusecke 14314623 2019-04-09T20:16:18Z 2019-04-09T20:16:18Z CONTRIBUTOR

Could you think of a way I would be able to diagnose this further? Sorry for these broad questions, but I am not very familiar with these xarray internals.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Very slow coordinate assignment with dask array 429511994
470248465 https://github.com/pydata/xarray/pull/2778#issuecomment-470248465 https://api.github.com/repos/pydata/xarray/issues/2778 MDEyOklzc3VlQ29tbWVudDQ3MDI0ODQ2NQ== jbusecke 14314623 2019-03-06T19:44:24Z 2019-03-06T19:44:43Z CONTRIBUTOR

Oh yeah, that seems totally fair to me. Thanks for clarifying. Can't wait to have this functionality! Thanks @spencerkclark.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add support for cftime.datetime coordinates with coarsen 412078232
470226713 https://github.com/pydata/xarray/pull/2778#issuecomment-470226713 https://api.github.com/repos/pydata/xarray/issues/2778 MDEyOklzc3VlQ29tbWVudDQ3MDIyNjcxMw== jbusecke 14314623 2019-03-06T18:45:16Z 2019-03-06T18:45:16Z CONTRIBUTOR

Oh sweet, I just encountered this problem. Would this work on a large dask array with a non-dask time dimension?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add support for cftime.datetime coordinates with coarsen 412078232
465760182 https://github.com/pydata/xarray/issues/1467#issuecomment-465760182 https://api.github.com/repos/pydata/xarray/issues/1467 MDEyOklzc3VlQ29tbWVudDQ2NTc2MDE4Mg== jbusecke 14314623 2019-02-20T21:25:01Z 2019-02-20T21:25:01Z CONTRIBUTOR

I have run into this problem multiple times. The latest examples I found were some [CORE ocean model runs](https://rda.ucar.edu/datasets/ds262.0/index.html#!description). The time dimension of some of these files (they mix units) is given as:

```
netcdf MRI-A_sigma0_monthly {
dimensions:
    level = 51 ;
    latitude = 368 ;
    longitude = 364 ;
    time = UNLIMITED ; // (720 currently)
    time_bnds = 2 ;
variables:
    double latitude(latitude) ;
        latitude:units = "degrees_north " ;
        latitude:axis = "Y" ;
    double longitude(longitude) ;
        longitude:units = "degrees_east " ;
        longitude:axis = "X" ;
    double level(level) ;
        level:units = "m " ;
        level:axis = "Z" ;
    double time(time) ;
        time:units = "years since 1948-1-1 00:00:00 " ;
        time:axis = "T" ;
        time:bounds = "time_bnds" ;
        time:calendar = "noleap" ;
    double time_bnds(time, time_bnds) ;
    float sigma0(time, level, latitude, longitude) ;
        sigma0:units = "kg/m^3 " ;
        sigma0:long_name = "Monthly-mean potential density (sigma-0) " ;
        sigma0:missing_value = -9.99e+33f ;
}
```

I understand that 'fully' supporting decoding of this unit is hard and should probably be addressed upstream.

But I think it might be useful to have a utility function that converts a dataset with these units into something quickly usable with xarray. E.g. one could load the dataset with ds = xr.open_dataset(..., decode_times=False) and then maybe call xr.decode_funky_units(ds, units='calendaryears', ...), which could default to the first day of a year (or the first day of a month for units of months since).

This way the user is aware that something is not decoded exactly, but can work with the data. Is this something that people could see as useful here? I'd be happy to give an implementation a shot if there is interest.
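As a rough sketch of what such a helper could do for the 'years since' case (xr.decode_funky_units is of course hypothetical; this just decodes manually with cftime, defaulting to the first day of the year):

```python
import xarray as xr
import cftime

ds = xr.open_dataset(path, decode_times=False)  # `path` is a placeholder

# "years since 1948-1-1" on a noleap calendar; truncate to the first day of the year
years = ds['time'].values
ds = ds.assign_coords(
    time=[cftime.DatetimeNoLeap(1948 + int(y), 1, 1) for y in years]
)
```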

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF conventions for time doesn't support years 238990919
461631993 https://github.com/pydata/xarray/pull/2665#issuecomment-461631993 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ2MTYzMTk5Mw== jbusecke 14314623 2019-02-07T23:18:24Z 2019-02-07T23:18:24Z CONTRIBUTOR

Is there anything else that I need to do at this point? Sorry for the xarray noob question...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
461414910 https://github.com/pydata/xarray/pull/2665#issuecomment-461414910 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ2MTQxNDkxMA== jbusecke 14314623 2019-02-07T13:18:07Z 2019-02-07T13:18:07Z CONTRIBUTOR

Awesome. Just added the line. Let me know if you think it is appropriate.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
461066722 https://github.com/pydata/xarray/pull/2665#issuecomment-461066722 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ2MTA2NjcyMg== jbusecke 14314623 2019-02-06T15:33:36Z 2019-02-06T15:33:36Z CONTRIBUTOR

Thanks. I updated the PR accordingly.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
460902688 https://github.com/pydata/xarray/pull/2665#issuecomment-460902688 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ2MDkwMjY4OA== jbusecke 14314623 2019-02-06T05:08:09Z 2019-02-06T05:08:09Z CONTRIBUTOR

Seems like the Travis builds all pass, woohoo. Please let me know if anything else is needed.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
460803180 https://github.com/pydata/xarray/pull/2665#issuecomment-460803180 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ2MDgwMzE4MA== jbusecke 14314623 2019-02-05T21:05:17Z 2019-02-05T21:05:17Z CONTRIBUTOR

I think I have addressed all the above remarks (Many thanks for the thorough review and tips). Waiting for the CI again.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
460777924 https://github.com/pydata/xarray/pull/2665#issuecomment-460777924 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ2MDc3NzkyNA== jbusecke 14314623 2019-02-05T19:47:56Z 2019-02-05T19:47:56Z CONTRIBUTOR

I am somehow getting these errors in the backend part of the tests:

```
======================================================== FAILURES =========================================================
______________________________ TestNetCDF3ViaNetCDF4Data.test_encoding_same_dtype ______________________________

self = <xarray.tests.test_backends.TestNetCDF3ViaNetCDF4Data object at 0xd238d9a20>

    def test_encoding_same_dtype(self):
        ds = Dataset({'x': ('y', np.arange(10.0, dtype='f4'))})
        kwargs = dict(encoding={'x': {'dtype': 'f4'}})
        with self.roundtrip(ds, save_kwargs=kwargs) as actual:
>           assert actual.x.encoding['dtype'] == 'f4'
E           AssertionError: assert dtype('>f4') == 'f4'

xarray/tests/test_backends.py:853: AssertionError
______________________________ TestGenericNetCDFData.test_encoding_same_dtype ______________________________

self = <xarray.tests.test_backends.TestGenericNetCDFData object at 0xd238e0588>

    def test_encoding_same_dtype(self):
        ds = Dataset({'x': ('y', np.arange(10.0, dtype='f4'))})
        kwargs = dict(encoding={'x': {'dtype': 'f4'}})
        with self.roundtrip(ds, save_kwargs=kwargs) as actual:
>           assert actual.x.encoding['dtype'] == 'f4'
E           AssertionError: assert dtype('>f4') == 'f4'

xarray/tests/test_backends.py:853: AssertionError
```

They do not always show up... not sure what to make of it, but it could be an issue with my local environment. Let's see if the CI shows this as well.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
460744700 https://github.com/pydata/xarray/pull/2665#issuecomment-460744700 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ2MDc0NDcwMA== jbusecke 14314623 2019-02-05T18:14:42Z 2019-02-05T18:14:42Z CONTRIBUTOR

Ok, I think I have most of the things covered. All tests pass for me locally. What should I add to the whats-new.rst? I thought of something like this under Enhancements (or would this be considered a bug fix?): *Internal plotting now supports cftime.datetime objects as time axis (@spencerkclark, @jbusecke)*

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
457785427 https://github.com/pydata/xarray/pull/2665#issuecomment-457785427 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ1Nzc4NTQyNw== jbusecke 14314623 2019-01-26T01:02:47Z 2019-01-26T01:03:13Z CONTRIBUTOR

Great idea to simplify, @spencerkclark. Thanks. Regarding the tests: I have removed the following:

```python
@requires_cftime
def test_plot_cftime_coordinate_error():
    cftime = _import_cftime()
    time = cftime.num2date(np.arange(5), units='days since 0001-01-01',
                           calendar='noleap')
    data = DataArray(np.arange(5), coords=[time], dims=['time'])
    with raises_regex(TypeError,
                      'requires coordinates to be numeric or dates'):
        data.plot()


@requires_cftime
def test_plot_cftime_data_error():
    cftime = _import_cftime()
    data = cftime.num2date(np.arange(5), units='days since 0001-01-01',
                           calendar='noleap')
    data = DataArray(data, coords=[np.arange(5)], dims=['x'])
    with raises_regex(NotImplementedError, 'cftime.datetime'):
        data.plot()
```

And the test suite passes locally.

But I assume I'll have to add another test dataset with a cftime.datetime time-axis, which then gets dragged through all the plotting tests? Where would I have to put that in?

Many thanks for all the help

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
457768114 https://github.com/pydata/xarray/pull/2665#issuecomment-457768114 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ1Nzc2ODExNA== jbusecke 14314623 2019-01-25T23:18:16Z 2019-01-25T23:21:13Z CONTRIBUTOR

I have quickly looked into the testing and found an oddity that might be important if nc-time-axis is not installed.

So in the definition of `plot` in `plot.py` I have changed

```python
if contains_cftime_datetimes(darray):
```

to

```python
if any([contains_cftime_datetimes(darray[dim]) for dim in darray.dims]):
```

because, if I understand correctly, the previous statement only checks the dtype of the actual data, not the dimensions. Is this appropriate or am I misunderstanding the syntax? In my example above it doesn't matter, because this only spits out an error message when nc-time-axis is not available.
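To illustrate the difference on a toy array (a sketch using the internal helper, which is not public API):

```python
import numpy as np
import xarray as xr
from xarray.core.common import contains_cftime_datetimes  # internal helper

time = xr.cftime_range(start='2000', periods=4, freq='1H', calendar='noleap')
da = xr.DataArray(np.random.rand(4), coords=[('time', time)])

# inspects only the array's own values, which here are plain floats
contains_cftime_datetimes(da)  # -> False

# checking each dimension coordinate catches the cftime time axis
any([contains_cftime_datetimes(da[dim]) for dim in da.dims])  # -> True
```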

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
457766633 https://github.com/pydata/xarray/pull/2665#issuecomment-457766633 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ1Nzc2NjYzMw== jbusecke 14314623 2019-01-25T23:10:48Z 2019-01-25T23:10:48Z CONTRIBUTOR

Ok so the plotting works now with both timeseries and 2d data as follows:

```python
import xarray as xr
import numpy as np
%matplotlib inline

# Create a simple line dataarray with cftime
time = xr.cftime_range(start='2000', periods=4, freq='1H', calendar='noleap')
data = np.random.rand(len(time))
da = xr.DataArray(data, coords=[('time', time)])
da.plot()
```

![download](https://user-images.githubusercontent.com/14314623/51777752-6d427580-20cc-11e9-9c3d-a91d31b10312.png)

```python
# Check with 2d data
time = xr.cftime_range(start='2000', periods=6, freq='2MS', calendar='noleap')
data2 = np.random.rand(len(time), 4)
da2 = xr.DataArray(data2, coords=[('time', time), ('other', range(4))])
da2.plot()
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
457723510 https://github.com/pydata/xarray/pull/2665#issuecomment-457723510 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ1NzcyMzUxMA== jbusecke 14314623 2019-01-25T20:51:30Z 2019-01-25T20:51:30Z CONTRIBUTOR

Cool. I'll give it a shot right now.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
457196788 https://github.com/pydata/xarray/pull/2665#issuecomment-457196788 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ1NzE5Njc4OA== jbusecke 14314623 2019-01-24T13:30:24Z 2019-01-24T13:30:24Z CONTRIBUTOR

Sounds good to me.

Best,

Julius

On Jan 23, 2019, 12:56 PM -0500, Spencer Clark notifications@github.com, wrote:

> I agree @dcherian; I just pinged the PR again, but if there is no activity there by this time next week, I think we should probably move forward here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
456883187 https://github.com/pydata/xarray/pull/2665#issuecomment-456883187 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ1Njg4MzE4Nw== jbusecke 14314623 2019-01-23T17:02:14Z 2019-01-23T17:02:14Z CONTRIBUTOR

Is there still interest in this PR? Or did the upstream changes move ahead? I am finding myself explaining workarounds for this to students in the department, so maybe my time would be better invested getting this fix to the full community?

But obviously, if things are going to be fixed upstream soon, I would devote time to other projects. Thoughts?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
453298775 https://github.com/pydata/xarray/pull/2665#issuecomment-453298775 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ1MzI5ODc3NQ== jbusecke 14314623 2019-01-10T23:24:52Z 2019-01-10T23:24:52Z CONTRIBUTOR

Oh shoot, I now remember seeing this. If this will be implemented soon, I guess the PR can be discarded. Any chance you would have a quick solution for the pcolormesh plot error (second example in the PR), @spencerkclark?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
453282855 https://github.com/pydata/xarray/pull/2665#issuecomment-453282855 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ1MzI4Mjg1NQ== jbusecke 14314623 2019-01-10T22:38:47Z 2019-01-10T22:38:47Z CONTRIBUTOR

One of the more general questions I had was whether we should expose the conversion using nc-time-axis in the public API. That way users could easily plot the data in matplotlib, e.g.:

```python
da_new = da.convert_cftime()
plt.plot(da_new.time, da_new)
```

Just an idea...
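A rough sketch of what such a method could do under the hood, assuming the nc-time-axis 1.x `CalendarDateTime` wrapper (the name `convert_cftime` is hypothetical):

```python
import nc_time_axis  # registers a matplotlib converter for CalendarDateTime

def convert_cftime(da, dim='time', calendar='noleap'):
    # wrap each cftime.datetime so matplotlib's unit machinery can plot it
    wrapped = [nc_time_axis.CalendarDateTime(t, calendar)
               for t in da[dim].values]
    return da.assign_coords({dim: wrapped})
```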

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
453280699 https://github.com/pydata/xarray/pull/2665#issuecomment-453280699 https://api.github.com/repos/pydata/xarray/issues/2665 MDEyOklzc3VlQ29tbWVudDQ1MzI4MDY5OQ== jbusecke 14314623 2019-01-10T22:31:48Z 2019-01-10T22:32:16Z CONTRIBUTOR

I have been working along the lines of a short example. This works for timeseries data:

```python
import xarray as xr
import numpy as np
%matplotlib inline

# Create a simple line dataarray with cftime
time = xr.cftime_range(start='2000', periods=6, freq='2MS', calendar='noleap')
data = np.random.rand(len(time))
da = xr.DataArray(data, coords=[('time', time)])
da.plot()
```

For pcolormesh plots this still fails:

```python
# Create a simple 2d dataarray with cftime
time = xr.cftime_range(start='2000', periods=6, freq='2MS', calendar='noleap')
data2 = np.random.rand(len(time), 4)
da2 = xr.DataArray(data2, coords=[('time', time), ('other', range(4))])
da2.plot()
```

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-2-645c66b57bde> in <module>
      3 data2 = np.random.rand(len(time), 4)
      4 da2 = xr.DataArray(data2, coords=[('time', time), ('other', range(4))])
----> 5 da2.plot()

~/Work/CODE/PYTHON/xarray/xarray/plot/plot.py in __call__(self, **kwargs)
    585
    586     def __call__(self, **kwargs):
--> 587         return plot(self._da, **kwargs)
    588
    589     @functools.wraps(hist)

~/Work/CODE/PYTHON/xarray/xarray/plot/plot.py in plot(darray, row, col, col_wrap, ax, hue, rtol, subplot_kws, **kwargs)
    220         kwargs['ax'] = ax
    221
--> 222     return plotfunc(darray, **kwargs)
    223
    224

~/Work/CODE/PYTHON/xarray/xarray/plot/plot.py in newplotfunc(darray, x, y, figsize, size, aspect, ax, row, col, col_wrap, xincrease, yincrease, add_colorbar, add_labels, vmin, vmax, cmap, center, robust, extend, levels, infer_intervals, colors, subplot_kws, cbar_ax, cbar_kwargs, xscale, yscale, xticks, yticks, xlim, ylim, norm, **kwargs)
    887                          vmax=cmap_params['vmax'],
    888                          norm=cmap_params['norm'],
--> 889                          **kwargs)
    890
    891     # Label the plot with metadata

~/Work/CODE/PYTHON/xarray/xarray/plot/plot.py in pcolormesh(x, y, z, ax, infer_intervals, **kwargs)
   1135             (np.shape(y)[0] == np.shape(z)[0])):
   1136         if len(y.shape) == 1:
-> 1137             y = _infer_interval_breaks(y, check_monotonic=True)
   1138         else:
   1139             # we have to infer the intervals on both axes

~/Work/CODE/PYTHON/xarray/xarray/plot/plot.py in _infer_interval_breaks(coord, axis, check_monotonic)
   1085     coord = np.asarray(coord)
   1086
-> 1087     if check_monotonic and not _is_monotonic(coord, axis=axis):
   1088         raise ValueError("The input coordinate is not sorted in increasing "
   1089                          "order along axis %d. This can lead to unexpected "

~/Work/CODE/PYTHON/xarray/xarray/plot/plot.py in _is_monotonic(coord, axis)
   1069     n = coord.shape[axis]
   1070     delta_pos = (coord.take(np.arange(1, n), axis=axis) >=
-> 1071                  coord.take(np.arange(0, n - 1), axis=axis))
   1072     delta_neg = (coord.take(np.arange(1, n), axis=axis) <=
   1073                  coord.take(np.arange(0, n - 1), axis=axis))

TypeError: '>=' not supported between instances of 'CalendarDateTime' and 'CalendarDateTime'
```

Perhaps @spencerkclark has an idea how to deal with differencing cftime.datetime objects?
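For what it's worth, raw cftime.datetime objects do support ordering and differencing; the failure above comes from the `CalendarDateTime` wrapper. A small illustration of the underlying behavior (a sketch, not the fix itself):

```python
import cftime

a = cftime.DatetimeNoLeap(2000, 1, 1)
b = cftime.DatetimeNoLeap(2000, 3, 1)

b >= a        # True: raw cftime datetimes are orderable
(b - a).days  # 59: subtraction yields a timedelta (31 + 28 days, noleap)
```

So one option might be to run the monotonicity check before (or without) wrapping the coordinate values.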

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  enable internal plotting with cftime datetime 398041758
453278625 https://github.com/pydata/xarray/issues/2164#issuecomment-453278625 https://api.github.com/repos/pydata/xarray/issues/2164 MDEyOklzc3VlQ29tbWVudDQ1MzI3ODYyNQ== jbusecke 14314623 2019-01-10T22:24:50Z 2019-01-10T22:24:50Z CONTRIBUTOR

I have taken a swing at restoring the internal plotting capabilities in #2665. Feedback would be very much appreciated since I am still very unfamiliar with the xarray plotting internals.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  holoviews / bokeh doesn't like cftime coords 324740017
439892007 https://github.com/pydata/xarray/issues/2525#issuecomment-439892007 https://api.github.com/repos/pydata/xarray/issues/2525 MDEyOklzc3VlQ29tbWVudDQzOTg5MjAwNw== jbusecke 14314623 2018-11-19T13:26:45Z 2018-11-19T13:26:45Z CONTRIBUTOR

I think mean would be a good default (thinking about cell-center dimensions like longitude and latitude), but I would very much like it if other functions could be specified, e.g. for grid-face dimensions (where min and max would be more appropriate) and other coordinates like surface area (where sum would be the most appropriate function).

On Nov 18, 2018, at 11:13 PM, Ryan Abernathey notifications@github.com, wrote:

> What would the coordinates look like?
>
> - apply func also for coordinate
> - always apply mean to coordinate

If I think about my applications, I would probably always want to apply mean to dimension coordinates, but would like to be able to choose for non-dimension coordinates.
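As a sketch of what the call could eventually look like (using the `coord_func` argument that later landed in xarray's coarsen; `area` is a made-up non-dimension coordinate):

```python
# mean for cell-center coordinates, sum for a cell-area coordinate
coarse = ds.coarsen(lon=2, lat=2,
                    coord_func={'lon': 'mean', 'lat': 'mean',
                                'area': 'sum'}).mean()
```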

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Multi-dimensional binning/resampling/coarsening 375126758
435201618 https://github.com/pydata/xarray/issues/2525#issuecomment-435201618 https://api.github.com/repos/pydata/xarray/issues/2525 MDEyOklzc3VlQ29tbWVudDQzNTIwMTYxOA== jbusecke 14314623 2018-11-01T21:59:19Z 2018-11-01T21:59:19Z CONTRIBUTOR

My favorite would be da.coarsen({'lat': 2, 'lon': 2}).mean(), but all the others sound reasonable to me. Also +1 for consistency with resample/rolling/groupby.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Multi-dimensional binning/resampling/coarsening 375126758
434531970 https://github.com/pydata/xarray/issues/2525#issuecomment-434531970 https://api.github.com/repos/pydata/xarray/issues/2525 MDEyOklzc3VlQ29tbWVudDQzNDUzMTk3MA== jbusecke 14314623 2018-10-31T01:46:19Z 2018-10-31T01:46:19Z CONTRIBUTOR

I agree with @rabernat, and favor the index-based approach. For regular lon-lat grids it's easy enough to implement a weighted mean, and for irregularly spaced grids and other exotic grids the coordinate-based approach might lead to errors. To me the resample API above might suggest to some users that some proper regridding (a la xESMF) onto a regular lat/lon grid is performed.

`block_reduce` sounds good to me and sounds appropriate for non-dask arrays. Does anyone have experience with how `dask.coarsen` compares performance-wise?
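For reference, a minimal example of the dask primitive in question (sizes made up):

```python
import numpy as np
import dask.array as dsa

x = dsa.random.random((1000, 1000), chunks=(500, 500))
# axes maps axis index -> window size; trim_excess drops the remainder
# instead of raising when a dimension is not evenly divisible
xc = dsa.coarsen(np.mean, x, {0: 2, 1: 2}, trim_excess=True)
xc.compute()  # (500, 500) block-wise means
```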

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Multi-dimensional binning/resampling/coarsening 375126758
433510805 https://github.com/pydata/xarray/issues/1192#issuecomment-433510805 https://api.github.com/repos/pydata/xarray/issues/1192 MDEyOklzc3VlQ29tbWVudDQzMzUxMDgwNQ== jbusecke 14314623 2018-10-26T18:59:07Z 2018-10-26T18:59:07Z CONTRIBUTOR

I should add that I would be happy to work on an implementation, but would probably need a good number of pointers.

Here is the implementation that I have been using (only works with dask.arrays at this point).

Should have posted that earlier to avoid @rabernat's zingers over here.

```python
# note: these module-level imports are assumed from the surrounding module
# (they were not shown in the original snippet)
import warnings

import numpy as np
import xarray as xr
from dask.array import Array, coarsen


def aggregate(da, blocks, func=np.nanmean, debug=False):
    """
    Performs efficient block averaging in one or multiple dimensions.
    Only works on regular grid dimensions.

    Parameters
    ----------
    da : xarray DataArray (must be a dask array!)
    blocks : list
        List of tuples containing the dimension and interval to
        aggregate over
    func : function
        Aggregation function. Defaults to numpy.nanmean

    Returns
    -------
    da_agg : xarray DataArray
        Aggregated array

    Examples
    --------
    >>> from xarrayutils import aggregate
    >>> import numpy as np
    >>> import xarray as xr
    >>> import dask.array as da
    >>> x = np.arange(-10, 10)
    >>> y = np.arange(-10, 10)
    >>> xx, yy = np.meshgrid(x, y)
    >>> z = xx**2 - yy**2
    >>> a = xr.DataArray(da.from_array(z, chunks=(20, 20)),
    ...                  coords={'x': x, 'y': y}, dims=['y', 'x'])
    >>> print(a)
    <xarray.DataArray 'array-7e422c91624f207a5f7ebac426c01769' (y: 20, x: 20)>
    dask.array<array-7..., shape=(20, 20), dtype=int64, chunksize=(20, 20)>
    Coordinates:
      * y        (y) int64 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9
      * x        (x) int64 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9
    >>> blocks = [('x', 2), ('y', 5)]
    >>> a_coarse = aggregate(a, blocks, func=np.mean)
    >>> print(a_coarse)
    <xarray.DataArray 'array-7e422c91624f207a5f7ebac426c01769' (y: 2, x: 10)>
    dask.array<coarsen..., shape=(2, 10), dtype=float64, chunksize=(2, 10)>
    Coordinates:
      * y        (y) int64 -10 0
      * x        (x) int64 -10 -8 -6 -4 -2 0 2 4 6 8
    Attributes:
        Coarsened with: <function mean at 0x111754230>
        Coarsenblocks: [('x', 2), ('y', 10)]
    """
    # Check if the input is a dask array (I might want to convert this
    # automatically in the future)
    if not isinstance(da.data, Array):
        raise RuntimeError('data array data must be a dask array')

    # Check data type of blocks
    # TODO write test
    if (not all(isinstance(n[0], str) for n in blocks) or
            not all(isinstance(n[1], int) for n in blocks)):
        print('blocks input', str(blocks))
        raise RuntimeError("block dimension must be dtype(str), "
                           "e.g. ('lon',4)")

    # Check if the given array has the dimension specified in blocks
    try:
        block_dict = dict((da.get_axis_num(x), y) for x, y in blocks)
    except ValueError:
        raise RuntimeError("'blocks' contains non matching dimension")

    # Check the size of the excess in each aggregated axis
    blocks = [(a[0], a[1], da.shape[da.get_axis_num(a[0])] % a[1])
              for a in blocks]

    # for now default to trimming the excess
    da_coarse = coarsen(func, da.data, block_dict, trim_excess=True)

    # for now default to only the dims
    new_coords = dict([])
    # for cc in da.coords.keys():
    warnings.warn("WARNING: only dimensions are carried over as coordinates")
    for cc in list(da.dims):
        new_coords[cc] = da.coords[cc]
        for dd in blocks:
            if dd[0] in list(da.coords[cc].dims):
                new_coords[cc] = \
                    new_coords[cc].isel(
                        **{dd[0]: slice(0, -(1 + dd[2]), dd[1])})

    attrs = {'Coarsened with': str(func), 'Coarsenblocks': str(blocks)}
    da_coarse = xr.DataArray(da_coarse, dims=da.dims, coords=new_coords,
                             name=da.name, attrs=attrs)
    return da_coarse
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implementing dask.array.coarsen in xarrays 198742089
433160023 https://github.com/pydata/xarray/issues/1192#issuecomment-433160023 https://api.github.com/repos/pydata/xarray/issues/1192 MDEyOklzc3VlQ29tbWVudDQzMzE2MDAyMw== jbusecke 14314623 2018-10-25T18:35:57Z 2018-10-25T18:35:57Z CONTRIBUTOR

Is this feature still being considered? A big +1 from me.

I wrote my own function to achieve this (using dask.array.coarsen), but I was planning to implement similar functionality in xgcm, and it would be ideal if we could use an upstream implementation from xarray.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implementing dask.array.coarsen in xarrays 198742089
423540830 https://github.com/pydata/xarray/issues/2406#issuecomment-423540830 https://api.github.com/repos/pydata/xarray/issues/2406 MDEyOklzc3VlQ29tbWVudDQyMzU0MDgzMA== jbusecke 14314623 2018-09-21T13:59:52Z 2018-09-21T13:59:52Z CONTRIBUTOR

I would prefer `axis_aspect`, to allow aspects other than square.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Achieving square aspect for Facetgrid heatmaps 357970488
419411028 https://github.com/pydata/xarray/issues/2406#issuecomment-419411028 https://api.github.com/repos/pydata/xarray/issues/2406 MDEyOklzc3VlQ29tbWVudDQxOTQxMTAyOA== jbusecke 14314623 2018-09-07T11:28:09Z 2018-09-07T11:28:09Z CONTRIBUTOR

I like the idea. I would prefer `aspect_square=True`, as it is clearer.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Achieving square aspect for Facetgrid heatmaps 357970488
418744229 https://github.com/pydata/xarray/pull/2397#issuecomment-418744229 https://api.github.com/repos/pydata/xarray/issues/2397 MDEyOklzc3VlQ29tbWVudDQxODc0NDIyOQ== jbusecke 14314623 2018-09-05T14:07:36Z 2018-09-05T14:07:36Z CONTRIBUTOR

I am unsure if that failure is due to a timeout or to changes in the PR. Is there anything else that I should change before merging? Again, many thanks for the help in getting this forward.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add options for nondivergent and divergent cmap 356546301
418320682 https://github.com/pydata/xarray/pull/2397#issuecomment-418320682 https://api.github.com/repos/pydata/xarray/issues/2397 MDEyOklzc3VlQ29tbWVudDQxODMyMDY4Mg== jbusecke 14314623 2018-09-04T10:39:08Z 2018-09-04T10:39:08Z CONTRIBUTOR

Yikes. Also sorry for this failed merge commit. I am sitting at a conference :-).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add options for nondivergent and divergent cmap 356546301
418319166 https://github.com/pydata/xarray/pull/2397#issuecomment-418319166 https://api.github.com/repos/pydata/xarray/issues/2397 MDEyOklzc3VlQ29tbWVudDQxODMxOTE2Ng== jbusecke 14314623 2018-09-04T10:32:27Z 2018-09-04T10:32:27Z CONTRIBUTOR

Awesome. Thanks a lot for the feedback @dcherian.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add options for nondivergent and divergent cmap 356546301
418254612 https://github.com/pydata/xarray/pull/2397#issuecomment-418254612 https://api.github.com/repos/pydata/xarray/issues/2397 MDEyOklzc3VlQ29tbWVudDQxODI1NDYxMg== jbusecke 14314623 2018-09-04T06:25:09Z 2018-09-04T06:25:09Z CONTRIBUTOR

Yes, that is probably more consistent, @dcherian. I'll change it in a bit.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add options for nondivergent and divergent cmap 356546301
418151476 https://github.com/pydata/xarray/issues/2394#issuecomment-418151476 https://api.github.com/repos/pydata/xarray/issues/2394 MDEyOklzc3VlQ29tbWVudDQxODE1MTQ3Ng== jbusecke 14314623 2018-09-03T15:57:33Z 2018-09-03T15:57:33Z CONTRIBUTOR

It seems like the problems I encountered during testing are caused by my local setup after all. The Travis CI passed. Just added the test for the divergent colormap.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Change default colormaps 356067160
418148565 https://github.com/pydata/xarray/issues/2394#issuecomment-418148565 https://api.github.com/repos/pydata/xarray/issues/2394 MDEyOklzc3VlQ29tbWVudDQxODE0ODU2NQ== jbusecke 14314623 2018-09-03T15:42:33Z 2018-09-03T15:42:33Z CONTRIBUTOR

I took a shot at it in #2397. Setting the options works locally on my laptop, but I am not clear on how to properly test it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Change default colormaps 356067160
403950093 https://github.com/pydata/xarray/issues/2164#issuecomment-403950093 https://api.github.com/repos/pydata/xarray/issues/2164 MDEyOklzc3VlQ29tbWVudDQwMzk1MDA5Mw== jbusecke 14314623 2018-07-10T20:10:16Z 2018-07-10T20:10:16Z CONTRIBUTOR

I just encountered this problem with the xarray built-in plotting. Does anybody know of a workaround for the xarray plotting, by any chance?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  holoviews / bokeh doesn't like cftime coords 324740017
390513494 https://github.com/pydata/xarray/pull/2151#issuecomment-390513494 https://api.github.com/repos/pydata/xarray/issues/2151 MDEyOklzc3VlQ29tbWVudDM5MDUxMzQ5NA== jbusecke 14314623 2018-05-20T21:17:51Z 2018-05-20T21:17:51Z CONTRIBUTOR

This looks good to me! Thanks for the implementation! This will save lots of seconds that add up ;-)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Plot labels use CF convention information if available. 324099923
373123959 https://github.com/pydata/xarray/issues/1823#issuecomment-373123959 https://api.github.com/repos/pydata/xarray/issues/1823 MDEyOklzc3VlQ29tbWVudDM3MzEyMzk1OQ== jbusecke 14314623 2018-03-14T18:16:38Z 2018-03-14T18:16:38Z CONTRIBUTOR

Awesome, thanks for the clarification. I just looked at #1981 and it seems indeed very elegant (in fact I just now used this approach to parallelize printing of movie frames!) Thanks for that!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  We need a fast path for open_mfdataset 288184220

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);