issue_comments: 359129344
html_url: https://github.com/pydata/xarray/issues/1844#issuecomment-359129344
issue_url: https://api.github.com/repos/pydata/xarray/issues/1844
id: 359129344
node_id: MDEyOklzc3VlQ29tbWVudDM1OTEyOTM0NA==
user: 1217238
created_at: 2018-01-20T00:49:33Z
updated_at: 2018-01-20T00:49:56Z
author_association: MEMBER
body:

You can do this in a single step with `apply_ufunc`:

```python
import numpy as np
import pandas as pd
import xarray as xr

# example data
np.random.seed(123)
times = pd.date_range('2000-01-01', '2001-12-31', name='time')
annual_cycle = np.sin(2 * np.pi * (np.array(times.dayofyear) / 365.25 - 0.28))

base = 10 + 15 * annual_cycle.reshape(-1, 1)
tmin_values = base + 3 * np.random.randn(annual_cycle.size, 3)
tmax_values = base + 10 + 3 * np.random.randn(annual_cycle.size, 3)

ds = xr.Dataset({'tmin': (('time', 'location'), tmin_values),
                 'tmax': (('time', 'location'), tmax_values)},
                {'time': times, 'location': ['IA', 'IN', 'IL']})

# new code
ds_mean = ds.groupby('time.month').mean('time')
ds_std = ds.groupby('time.month').std('time')
xr.apply_ufunc(lambda x, m, s: (x - m) / s,
               ds.groupby('time.month'), ds_mean, ds_std)
```

The other way (about twice as slow) is to chain two calls to ...

I'll mark this as a documentation issue in case anyone wants to add an example to the docs.
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
290023410 |
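The comment body is cut off before spelling out the slower two-step approach. As a hedged illustration only (not necessarily the code the author had in mind), here is one way to express the same standardization with xarray's grouped arithmetic, reusing `ds`, `ds_mean`, and `ds_std` from the example above:

```python
# Hypothetical two-step alternative (assumption: the truncated sentence refers
# to chaining group-by arithmetic). Subtract the monthly means, then divide by
# the monthly standard deviations; each groupby() broadcasts the
# 'month'-indexed statistics back onto the 'time' dimension.
anomalies = ds.groupby('time.month') - ds_mean
standardized = anomalies.groupby('time.month') / ds_std
```

This version materializes the intermediate `anomalies` dataset and walks the groups twice, which is consistent with the comment's note that the alternative is about twice as slow as the single `apply_ufunc` call.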