issues
5 rows where comments = 10, repo = 13221727 (xarray) and user = 5635139 (max-sixty), sorted by updated_at descending
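In SQL terms, the page is the result of the following query against the issues table defined at the bottom of this page (a sketch; the exact SQL Datasette generates may differ slightly):

```sql
-- The five rows below: issues/PRs with exactly 10 comments,
-- filed by user 5635139 in repo 13221727, newest activity first.
select *
from [issues]
where [comments] = 10
  and [repo] = 13221727
  and [user] = 5635139
order by [updated_at] desc;
```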
| id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1986643906 | I_kwDOAMm_X852acfC | 8437 | Restrict pint test runs | max-sixty 5635139 | open | 0 | | | 10 | 2023-11-10T00:50:52Z | 2023-11-13T21:57:45Z | | MEMBER | | | | What is your issue? Pint tests are failing on main — https://github.com/pydata/xarray/actions/runs/6817674274/job/18541677930 If we can't fix soon, should we disable? CC @keewis | {"url": "https://api.github.com/repos/pydata/xarray/issues/8437/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | issue |
| 1865945636 | PR_kwDOAMm_X85YvIJ4 | 8114 | Move `.rolling_exp` functions from `reduce` to `apply_ufunc` | max-sixty 5635139 | closed | 0 | | | 10 | 2023-08-24T21:57:19Z | 2023-09-19T01:13:27Z | 2023-09-19T01:13:22Z | MEMBER | | 0 | pydata/xarray/pulls/8114 | A similar change should solve #6528, but let's get one finished first... ~Posting for discussion, will comment inline~ Ready for merge | {"url": "https://api.github.com/repos/pydata/xarray/issues/8114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull |
| 207862981 | MDU6SXNzdWUyMDc4NjI5ODE= | 1270 | BUG: Resample on PeriodIndex not working? | max-sixty 5635139 | closed | 0 | | | 10 | 2017-02-15T16:56:21Z | 2020-05-30T02:34:17Z | 2020-05-30T02:34:17Z | MEMBER | | | | ```python import xarray as xr import pandas as pd da = xr.DataArray(pd.Series(1, pd.period_range('2000-1', '2000-12', freq='W')).rename_axis('date')) da.resample('B', 'date', 'ffill') TypeError Traceback (most recent call last) <ipython-input-1-eb64a66a8d1f> in <module>() 3 da = xr.DataArray(pd.Series(1, pd.period_range('2000-1', '2000-12', freq='W')).rename_axis('date')) 4 ----> 5 da.resample('B', 'date', 'ffill') /Users/maximilian/drive/workspace/xarray/xarray/core/common.py in resample(self, freq, dim, how, skipna, closed, label, base, keep_attrs) 577 time_grouper = pd.TimeGrouper(freq=freq, how=how, closed=closed, 578 label=label, base=base) --> 579 gb = self.groupby_cls(self, group, grouper=time_grouper) 580 if isinstance(how, basestring): 581 f = getattr(gb, how) /Users/maximilian/drive/workspace/xarray/xarray/core/groupby.py in __init__(self, obj, group, squeeze, grouper, bins, cut_kwargs) 242 raise ValueError('index must be monotonic for resampling') 243 s = pd.Series(np.arange(index.size), index) --> 244 first_items = s.groupby(grouper).first() 245 if first_items.isnull().any(): 246 full_index = first_items.index /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pandas/core/generic.py in groupby(self, by, axis, level, as_index, sort, group_keys, squeeze, **kwargs) 3989 return groupby(self, by=by, axis=axis, level=level, as_index=as_index, 3990 sort=sort, group_keys=group_keys, squeeze=squeeze, -> 3991 **kwargs) 3992 3993 def asfreq(self, freq, method=None, how=None, normalize=False): /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pandas/core/groupby.py in groupby(obj, by, **kwds) 1509 raise TypeError('invalid type: %s' % type(obj)) 1510 -> 1511 return klass(obj, by, **kwds) 1512 1513 /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pandas/core/groupby.py in __init__(self, obj, keys, axis, level, grouper, exclusions, selection, as_index, sort, group_keys, squeeze, **kwargs) 368 level=level, 369 sort=sort, --> 370 mutated=self.mutated) 371 372 self.obj = obj /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pandas/core/groupby.py in _get_grouper(obj, key, axis, level, sort, mutated) 2390 # a passed-in Grouper, directly convert 2391 if isinstance(key, Grouper): -> 2392 binner, grouper, obj = key._get_grouper(obj) 2393 if key.key is None: 2394 return grouper, [], obj /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pandas/tseries/resample.py in _get_grouper(self, obj) 1059 def _get_grouper(self, obj): 1060 # create the resampler and return our binner -> 1061 r = self._get_resampler(obj) 1062 r._set_binner() 1063 return r.binner, r.grouper, r.obj /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pandas/tseries/resample.py in _get_resampler(self, obj, kind) 1055 raise TypeError("Only valid with DatetimeIndex, " 1056 "TimedeltaIndex or PeriodIndex, " -> 1057 "but got an instance of %r" % type(ax).__name__) 1058 1059 def _get_grouper(self, obj): TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'Index' ``` | {"url": "https://api.github.com/repos/pydata/xarray/issues/1270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | completed | xarray 13221727 | issue |
| 325609580 | MDExOlB1bGxSZXF1ZXN0MTg5OTAwNDA1 | 2174 | Datasets more robust to non-string keys | max-sixty 5635139 | closed | 0 | | | 10 | 2018-05-23T08:53:36Z | 2018-05-28T01:44:02Z | 2018-05-27T20:48:31Z | MEMBER | | 0 | pydata/xarray/pulls/2174 | I don't think this is the most efficient way of doing this, though it does work. Any ideas for a more efficient implementation? | {"url": "https://api.github.com/repos/pydata/xarray/issues/2174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull |
| 298437967 | MDExOlB1bGxSZXF1ZXN0MTcwMDc4MTkw | 1924 | isort | max-sixty 5635139 | closed | 0 | | | 10 | 2018-02-20T00:32:51Z | 2018-02-27T19:33:38Z | 2018-02-27T19:33:35Z | MEMBER | | 0 | pydata/xarray/pulls/1924 | Not sure if we want this? Probably too strict to enforce on every commit, more permissible to do a one-time update, assuming it doesn't cause merge conflicts. You can get the same result by running | {"url": "https://api.github.com/repos/pydata/xarray/issues/1924/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull |
CREATE TABLE [issues] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[state] TEXT,
[locked] INTEGER,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[comments] INTEGER,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[author_association] TEXT,
[active_lock_reason] TEXT,
[draft] INTEGER,
[pull_request] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[state_reason] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[type] TEXT
);
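Note that reactions is stored as a JSON text blob rather than as normalized columns. A sketch of extracting reaction counts with SQLite's built-in JSON functions (assumes a SQLite build with the JSON1 functions, standard in modern releases):

```sql
-- Pull per-issue reaction counts out of the reactions JSON blob.
-- Keys like "+1" must be double-quoted inside the JSON path.
select
  [number],
  [title],
  json_extract([reactions], '$.total_count') as total_reactions,
  json_extract([reactions], '$."+1"') as plus_one
from [issues]
where json_extract([reactions], '$.total_count') > 0
order by total_reactions desc;
```

Of the five rows on this page, only #1924 (isort) would match, with a single +1.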
CREATE INDEX [idx_issues_repo]
ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
ON [issues] ([user]);
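These indexes cover the four foreign-key columns, so filters like the ones behind this page (repo = 13221727, user = 5635139) can be answered from an index rather than a full table scan. The users table referenced by [user] and [assignee] is not defined on this page; assuming it carries a login column (as the github-to-sqlite schema does), the opaque user IDs can be resolved with a join:

```sql
-- Hypothetical join: users(id, login) is assumed, not shown in this schema.
select issues.[number], issues.[title], users.[login]
from [issues]
join [users] on users.[id] = issues.[user]
where issues.[repo] = 13221727
order by issues.[updated_at] desc;
```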