issue_comments
2 rows where issue = 253107677 and user = 15331990, sorted by updated_at descending
id: 325163311
html_url: https://github.com/pydata/xarray/issues/1527#issuecomment-325163311
issue_url: https://api.github.com/repos/pydata/xarray/issues/1527
node_id: MDEyOklzc3VlQ29tbWVudDMyNTE2MzMxMQ==
user: ahuang11 (15331990)
created_at: 2017-08-26T21:38:35Z
updated_at: 2017-08-26T21:38:35Z
author_association: CONTRIBUTOR
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app: (none)
issue: Binary operations with ds.groupby('time.dayofyear') errors out, but ds.groupby('time.month') works (253107677)
body:

I don't know if you tried this yet, but if you change the length to 365 and keep it to a non-leap year, it still errors out, so I guess the root issue is with how time.dayofyear uses 366 days?

```
import xarray as xr
import numpy as np
import pandas as pd

d1 = xr.DataArray(np.zeros(12000),
                  [('time', pd.date_range('2004-01-01', freq='D', periods=12000))])
d2 = xr.DataArray(np.zeros((365, 10)),
                  {'time': pd.date_range('1979-01-01', freq='D', periods=365),
                   'x': ('x', np.arange(10))},
                  dims=['time', 'x'])

d1.groupby('time.month') * d2.groupby('time.month').mean('time')
print('this works')

# no work
d1.groupby('time.dayofyear') * d2.groupby('time.dayofyear').mean('time')
print('this doesn\'t work')
```
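The leap-year observation in the comment above is easy to check directly. The sketch below is not part of the recorded comments; it only uses pandas to show that `time.dayofyear` reaches 366 for the 2004 index used in that snippet, but never exceeds 365 for a 1979 index of the same length.

```
import pandas as pd

# Leap-year index (2004, as in the comment above): dayofyear reaches 366.
leap = pd.date_range('2004-01-01', periods=366, freq='D')
print(leap.dayofyear.max())        # 366

# Non-leap index (1979): 366 daily steps wrap into 1980-01-01, so the
# labels are 1..365 plus a second 1 -- the label 366 never appears.
nonleap = pd.date_range('1979-01-01', periods=366, freq='D')
print(nonleap.dayofyear.max())     # 365
print(len(set(nonleap.dayofyear))) # 365 distinct labels
```

So the two sides of a dayofyear groupby can end up with different sets of group labels depending on whether the underlying index covers a leap year.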
id: 325149596
html_url: https://github.com/pydata/xarray/issues/1527#issuecomment-325149596
issue_url: https://api.github.com/repos/pydata/xarray/issues/1527
node_id: MDEyOklzc3VlQ29tbWVudDMyNTE0OTU5Ng==
user: ahuang11 (15331990)
created_at: 2017-08-26T17:28:28Z
updated_at: 2017-08-26T17:32:00Z
author_association: CONTRIBUTOR
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app: (none)
issue: Binary operations with ds.groupby('time.dayofyear') errors out, but ds.groupby('time.month') works (253107677)
body:

Thanks for your quick response!

If you swap the length, it errors out.

```
import xarray as xr
import numpy as np
import pandas as pd

d1 = xr.DataArray(np.zeros(12000),
                  [('time', pd.date_range('1979-01-01', freq='D', periods=12000))])
d2 = xr.DataArray(np.zeros((366, 10)),
                  {'time': pd.date_range('1979-01-01', freq='D', periods=366),
                   'x': ('x', np.arange(10))},
                  dims=['time', 'x'])

d1.groupby('time.month') - d2.groupby('time.month').mean('time')
print('this works')

# no work
d1.groupby('time.dayofyear') - d2.groupby('time.dayofyear').mean('time')
print('this doesn\'t work')
```

```
<xarray.DataArray (time: 12000, x: 10)>
array([[ 0.,  0.,  0., ...,  0.,  0.,  0.],
       [ 0.,  0.,  0., ...,  0.,  0.,  0.],
       [ 0.,  0.,  0., ...,  0.,  0.,  0.],
       ...,
       [ 0.,  0.,  0., ...,  0.,  0.,  0.],
       [ 0.,  0.,  0., ...,  0.,  0.,  0.],
       [ 0.,  0.,  0., ...,  0.,  0.,  0.]])
Coordinates:
  * x        (x) int64 0 1 2 3 4 5 6 7 8 9
  * time     (time) datetime64[ns] 1979-01-01 1979-01-02 1979-01-03 ...
    month    (time) int64 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ...
this works

KeyError                                  Traceback (most recent call last)
<ipython-input-24-4c92f88d0a14> in <module>()
     10
     11 # no work
---> 12 d1.groupby('time.dayofyear') - d2.groupby('time.dayofyear').mean('time')
     13 print('this doesn\'t work')

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/groupby.py in func(self, other)
    316             g = f if not reflexive else lambda x, y: f(y, x)
    317             applied = self._yield_binary_applied(g, other)
--> 318             combined = self._combine(applied)
    319             return combined
    320         return func

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/groupby.py in _combine(self, applied, shortcut)
    532             combined = self._concat_shortcut(applied, dim, positions)
    533         else:
--> 534             combined = concat(applied, dim)
    535         combined = _maybe_reorder(combined, dim, positions)
    536

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in concat(objs, dim, data_vars, coords, compat, positions, indexers, mode, concat_over)
    118         raise TypeError('can only concatenate xarray Dataset and DataArray '
    119                         'objects, got %s' % type(first_obj))
--> 120     return f(objs, dim, data_vars, coords, compat, positions)
    121
    122

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in _dataarray_concat(arrays, dim, data_vars, coords, compat, positions)
    304
    305     ds = _dataset_concat(datasets, dim, data_vars, coords, compat,
--> 306                          positions)
    307     return arrays[0]._from_temp_dataset(ds, name)
    308

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in _dataset_concat(datasets, dim, data_vars, coords, compat, positions)
    210     datasets = align(*datasets, join='outer', copy=False, exclude=[dim])
    211
--> 212     concat_over = _calc_concat_over(datasets, dim, data_vars, coords)
    213
    214     def insert_result_variable(k, v):

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in _calc_concat_over(datasets, dim, data_vars, coords)
    190                            if dim in v.dims)
    191     concat_over.update(process_subset_opt(data_vars, 'data_vars'))
--> 192     concat_over.update(process_subset_opt(coords, 'coords'))
    193     if dim in datasets[0]:
    194         concat_over.add(dim)

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in process_subset_opt(opt, subset)
    165                               for ds in datasets[1:])
    166             # all nonindexes that are not the same in each dataset
--> 167             concat_new = set(k for k in getattr(datasets[0], subset)
    168                              if k not in concat_over and differs(k))
    169         elif opt == 'all':

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in <genexpr>(.0)
    166             # all nonindexes that are not the same in each dataset
    167             concat_new = set(k for k in getattr(datasets[0], subset)
--> 168                              if k not in concat_over and differs(k))
    169         elif opt == 'all':
    170             concat_new = (set(getattr(datasets[0], subset)) -

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in differs(vname)
    163             v = datasets[0].variables[vname]
    164             return any(not ds.variables[vname].equals(v)
--> 165                        for ds in datasets[1:])
    166             # all nonindexes that are not the same in each dataset
    167             concat_new = set(k for k in getattr(datasets[0], subset)

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/combine.py in <genexpr>(.0)
    163             v = datasets[0].variables[vname]
    164             return any(not ds.variables[vname].equals(v)
--> 165                        for ds in datasets[1:])
    166             # all nonindexes that are not the same in each dataset
    167             concat_new = set(k for k in getattr(datasets[0], subset)

/data/keeling/a/ahuang11/anaconda3/lib/python3.6/site-packages/xarray/core/utils.py in __getitem__(self, key)
    288
    289     def __getitem__(self, key):
--> 290         return self.mapping[key]
    291
    292     def __iter__(self):

KeyError: 'x'
```
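For anyone reproducing the traceback above, a small diagnostic (not part of the recorded comments) can show how the two groupby objects differ. It assumes `d1` and `d2` are defined exactly as in the snippet in this comment and only inspects the group labels on each side.

```
# Assumes d1 and d2 from the comment above are already defined.
g1 = d1.groupby('time.dayofyear')
g2 = d2.groupby('time.dayofyear')

# Labels present on one side but missing on the other. With the 12000-day
# d1 (which spans leap years) and the 366-day d2 starting in non-leap 1979,
# the left side has a dayofyear label with no counterpart on the right.
print(sorted(set(g1.groups) - set(g2.groups)))
print(sorted(set(g2.groups) - set(g1.groups)))
```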
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
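The row selection described at the top of this page (2 rows where issue = 253107677 and user = 15331990, sorted by updated_at descending) corresponds to a straightforward query against this schema. The sketch below assumes a local SQLite copy of the database named github.db; that file name is illustrative only.

```
import sqlite3

# "github.db" is a hypothetical local copy of the database behind this page.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association
    FROM issue_comments
    WHERE issue = 253107677 AND user = 15331990
    ORDER BY updated_at DESC
    """
).fetchall()
for row in rows:
    print(row)
```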