html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1844#issuecomment-417694660,https://api.github.com/repos/pydata/xarray/issues/1844,417694660,MDEyOklzc3VlQ29tbWVudDQxNzY5NDY2MA==,1217238,2018-08-31T15:09:56Z,2018-08-31T15:09:56Z,MEMBER,@chiaral You should take a look at CFTimeIndex which was specifically designed to solve this problem: http://xarray.pydata.org/en/stable/time-series.html#non-standard-calendars-and-dates-outside-the-timestamp-valid-range,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,290023410
https://github.com/pydata/xarray/issues/1844#issuecomment-359129344,https://api.github.com/repos/pydata/xarray/issues/1844,359129344,MDEyOklzc3VlQ29tbWVudDM1OTEyOTM0NA==,1217238,2018-01-20T00:49:33Z,2018-01-20T00:49:56Z,MEMBER,"You can do this in a single step with `xarray.apply_ufunc()`, which is a more flexible and powerful interface to xarray's broadcasting arithmetic. Extending the [toy weather example](http://xarray.pydata.org/en/stable/examples/weather-data.html) from the docs:

```python
import xarray as xr
import numpy as np
import pandas as pd
import seaborn as sns  # pandas-aware plotting library

np.random.seed(123)

times = pd.date_range('2000-01-01', '2001-12-31', name='time')
annual_cycle = np.sin(2 * np.pi * (np.array(times.dayofyear) / 365.25 - 0.28))

base = 10 + 15 * annual_cycle.reshape(-1, 1)
tmin_values = base + 3 * np.random.randn(annual_cycle.size, 3)
tmax_values = base + 10 + 3 * np.random.randn(annual_cycle.size, 3)

ds = xr.Dataset({'tmin': (('time', 'location'), tmin_values),
                 'tmax': (('time', 'location'), tmax_values)},
                {'time': times, 'location': ['IA', 'IN', 'IL']})

# new code
ds_mean = ds.groupby('time.month').mean('time')
ds_std = ds.groupby('time.month').std('time')
xr.apply_ufunc(lambda x, m, s: (x - m) / s,
               ds.groupby('time.month'), ds_mean, ds_std)
```

The other way (about twice as slow) is to chain two calls to `groupby()`:

```python
(ds.groupby('time.month') - ds_mean).groupby('time.month') / ds_std
```

I'll mark this as a documentation issue in case anyone wants to add an example to the docs.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,290023410
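
A minimal sketch of the CFTimeIndex approach pointed to in the first comment, assuming the optional `cftime` package is installed; the calendar, dates, and variable names below are illustrative and not taken from the issue:

```python
import numpy as np
import xarray as xr

# Hypothetical example: build a time coordinate on a non-standard ("noleap")
# calendar, outside the range that pandas.Timestamp can represent.
# xr.cftime_range returns a CFTimeIndex instead of a pandas DatetimeIndex.
times = xr.cftime_range(start="0001-01-01", periods=365, freq="D", calendar="noleap")

da = xr.DataArray(np.arange(365), coords={"time": times}, dims="time")

# CFTimeIndex supports the usual time-series conveniences, e.g. label-based
# slicing and grouping over the virtual 'time.month' coordinate.
february = da.sel(time=slice("0001-02-01", "0001-02-28"))
monthly_means = da.groupby("time.month").mean()
```

This avoids the nanosecond-precision `pandas.Timestamp` limits while keeping the same selection and groupby workflow shown in the second comment.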