issues
12 rows where comments = 6, state = "closed", and user = 5635139, sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1975574237 | I_kwDOAMm_X851wN7d | 8409 | Task graphs on `.map_blocks` with many chunks can be huge | max-sixty 5635139 | closed | 0 | 6 | 2023-11-03T07:14:45Z | 2024-01-03T04:10:16Z | 2024-01-03T04:10:16Z | MEMBER | What happened? I'm getting task graphs > 1GB, I think possibly because the full indexes are being included in every task? What did you expect to happen? Only the relevant sections of the index would be included. Minimal Complete Verifiable Example: ```python da = xr.tutorial.load_dataset('air_temperature') # Dropping the index doesn't generally matter that much... len(cloudpickle.dumps(da.chunk(lat=1, lon=1))) # 15569320 len(cloudpickle.dumps(da.chunk().drop_vars(da.indexes))) # 15477313 # But with
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8409/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
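The size blow-up described in issue 8409 can be reproduced in miniature with nothing but the standard library. This is a hedged sketch of the *suspected* mechanism only (a full index embedded in every task), not xarray's actual graph construction; the task tuples and sizes here are illustrative:

```python
import pickle

index = list(range(10_000))   # stand-in for a large coordinate index
n_tasks = 50
chunk = len(index) // n_tasks

# Each task carries the whole index (the behaviour the issue suspects)...
full_tasks = [("block", i, index) for i in range(n_tasks)]
# ...versus each task carrying only the slice of the index it needs.
slim_tasks = [("block", i, index[i * chunk:(i + 1) * chunk])
              for i in range(n_tasks)]

# Serialize tasks independently, as a distributed scheduler would,
# so shared objects are not deduplicated across tasks.
size_full = sum(len(pickle.dumps(t)) for t in full_tasks)
size_slim = sum(len(pickle.dumps(t)) for t in slim_tasks)
print(size_full, size_slim)  # full-index graph is far larger
```

With the full index in every task the serialized graph scales as O(n_tasks × index_size), which matches the >1GB graphs reported above.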
866826033 | MDU6SXNzdWU4NjY4MjYwMzM= | 5215 | Add an Cumulative aggregation, similar to Rolling | max-sixty 5635139 | closed | 0 | 6 | 2021-04-24T19:59:49Z | 2023-12-08T22:06:53Z | 2023-12-08T22:06:53Z | MEMBER | Is your feature request related to a problem? Please describe. Pandas has a …
Describe the solution you'd like: Basically the same as pandas — a …
Describe alternatives you've considered: Some options:
– This
– Don't add anything; the sugar isn't worth the additional API.
– Go full out and write specialized expanding algos — which will be faster since they don't have to keep track of the window. But not that much faster, likely not worth the effort. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
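For context on issue 5215: the pandas feature it references is the `.expanding()` window, an ever-growing window that is equivalent to a rolling window spanning the whole series. A minimal sketch:

```python
import pandas as pd

s = pd.Series([3.0, 1.0, 4.0, 1.0])

# expanding(): element i aggregates over s[:i+1]
exp = s.expanding().sum()

# equivalent to a rolling window as wide as the whole series
roll = s.rolling(window=len(s), min_periods=1).sum()

print(exp.tolist())  # [3.0, 4.0, 8.0, 9.0]
assert exp.equals(roll)
```

The equivalence to full-width `rolling` is exactly the issue's point that specialized expanding algorithms would only be modestly faster than reusing the rolling machinery.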
1878288525 | PR_kwDOAMm_X85ZYos5 | 8139 | Fix pandas' `interpolate(fill_value=)` error | max-sixty 5635139 | closed | 0 | 6 | 2023-09-02T02:41:45Z | 2023-09-28T16:48:51Z | 2023-09-04T18:05:14Z | MEMBER | 0 | pydata/xarray/pulls/8139 | Pandas no longer has a … Weirdly, I wasn't getting this locally on pandas 2.1.0, only in CI on https://github.com/pydata/xarray/actions/runs/6054400455/job/16431747966?pr=8138. Removing it passes locally; let's see whether this works in CI. Would close #8125 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8139/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
967854972 | MDExOlB1bGxSZXF1ZXN0NzEwMDA1NzY4 | 5694 | Ask PRs to annotate tests | max-sixty 5635139 | closed | 0 | 6 | 2021-08-12T02:19:28Z | 2023-09-28T16:46:19Z | 2023-06-19T05:46:36Z | MEMBER | 0 | pydata/xarray/pulls/5694 |
As discussed https://github.com/pydata/xarray/pull/5690#issuecomment-897280353 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5694/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
729208432 | MDExOlB1bGxSZXF1ZXN0NTA5NzM0NTM2 | 4540 | numpy_groupies | max-sixty 5635139 | closed | 0 | 6 | 2020-10-26T03:37:19Z | 2022-02-05T22:24:12Z | 2021-10-24T00:18:52Z | MEMBER | 0 | pydata/xarray/pulls/4540 |
Very early effort — I found this harder than I expected — I was trying to use the existing groupby infra, but think I maybe should start afresh. The result of the … I also added some type signatures / notes as I was going through the existing code, mostly for my own understanding. If anyone has any thoughts, feel free to comment — otherwise I'll resume this soon |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4540/reactions", "total_count": 4, "+1": 2, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
399164733 | MDExOlB1bGxSZXF1ZXN0MjQ0NjU3NTk5 | 2674 | Skipping variables in datasets that don't have the core dim | max-sixty 5635139 | closed | 0 | 6 | 2019-01-15T02:43:11Z | 2021-05-13T22:02:19Z | 2021-05-13T22:02:19Z | MEMBER | 0 | pydata/xarray/pulls/2674 | ref https://github.com/pydata/xarray/pull/2650#issuecomment-454164295 This seems an ugly way of accomplishing the goal; any ideas for a better way of doing this? And stepping back, do others think a) it's helpful to skip variables in a dataset, and b) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
298421965 | MDU6SXNzdWUyOTg0MjE5NjU= | 1923 | Local test failure in test_backends | max-sixty 5635139 | closed | 0 | 6 | 2018-02-19T22:53:37Z | 2020-09-05T20:32:17Z | 2020-09-05T20:32:17Z | MEMBER | I'm happy to debug this further, but before I do: is this an issue people have seen before? I'm running tests on master and hit an issue very early on. FWIW I don't use netCDF, and don't think I've got that installed. Code Sample, a copy-pastable example if possible: ```python ========================================================================== FAILURES ========================================================================== _________ ScipyInMemoryDataTest.test_bytesio_pickle __________ self = <xarray.tests.test_backends.ScipyInMemoryDataTest testMethod=test_bytesio_pickle>
xarray/tests/test_backends.py:1384: TypeError ``` Problem description: [this should explain why the current behavior is a problem and why the expected output is a better solution.] Expected Output: Skip or pass backends tests. Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
575088962 | MDExOlB1bGxSZXF1ZXN0MzgzMzAwMjgw | 3826 | Allow ellipsis to be used in stack | max-sixty 5635139 | closed | 0 | 6 | 2020-03-04T02:21:21Z | 2020-03-20T01:20:54Z | 2020-03-19T22:55:09Z | MEMBER | 0 | pydata/xarray/pulls/3826 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
577283480 | MDExOlB1bGxSZXF1ZXN0Mzg1MTA3OTU4 | 3846 | Doctests fixes | max-sixty 5635139 | closed | 0 | 6 | 2020-03-07T05:44:27Z | 2020-03-10T14:03:05Z | 2020-03-10T14:03:00Z | MEMBER | 0 | pydata/xarray/pulls/3846 |
Starting to get some fixes in. It's going to be a long journey though. I think maybe we whitelist some files and move gradually through before whitelisting the whole library. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
485437811 | MDU6SXNzdWU0ODU0Mzc4MTE= | 3265 | Sparse tests failing on master | max-sixty 5635139 | closed | 0 | 6 | 2019-08-26T20:34:21Z | 2019-08-27T00:01:18Z | 2019-08-27T00:01:07Z | MEMBER | https://dev.azure.com/xarray/xarray/_build/results?buildId=695 ```python =================================== FAILURES =================================== ___ TestSparseVariable.test_unary_op ___ self = <xarray.tests.test_sparse.TestSparseVariable object at 0x7f24f0b21b70>
xarray/tests/test_sparse.py:285: AttributeError ___ TestSparseVariable.test_univariate_ufunc _____ self = <xarray.tests.test_sparse.TestSparseVariable object at 0x7f24ebc2bb38>
xarray/tests/test_sparse.py:290: AttributeError ___ TestSparseVariable.test_bivariate_ufunc ______ self = <xarray.tests.test_sparse.TestSparseVariable object at 0x7f24f02a7e10>
xarray/tests/test_sparse.py:293: AttributeError ___ TestSparseVariable.test_pickle ____ self = <xarray.tests.test_sparse.TestSparseVariable object at 0x7f24f04f2c50>
xarray/tests/test_sparse.py:307: AttributeError ``` Any ideas? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
457080809 | MDExOlB1bGxSZXF1ZXN0Mjg4OTY1MzQ4 | 3029 | Fix pandas-dev tests | max-sixty 5635139 | closed | 0 | 6 | 2019-06-17T18:15:16Z | 2019-06-28T15:31:33Z | 2019-06-28T15:31:28Z | MEMBER | 0 | pydata/xarray/pulls/3029 | Currently pandas-dev tests get 'stuck' on the conda install. The last instruction to run is the standard install:
And after installing the libraries, it prints this and then stops:
I'm not that familiar with conda. Anyone have any ideas as to why this would fail while the other builds would succeed? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3029/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
168901028 | MDU6SXNzdWUxNjg5MDEwMjg= | 934 | Should indexing be possible on 1D coords, even if not dims? | max-sixty 5635139 | closed | 0 | 6 | 2016-08-02T14:33:43Z | 2019-01-27T06:49:52Z | 2019-01-27T06:49:52Z | MEMBER | ``` python In [1]: arr = xr.DataArray(np.random.rand(4, 3), ...: ...: [('time', pd.date_range('2000-01-01', periods=4)), ...: ...: ('space', ['IA', 'IL', 'IN'])]) ...: ...: In [17]: arr.coords['space2'] = ('space', ['A','B','C']) In [18]: arr Out[18]: <xarray.DataArray (time: 4, space: 3)> array([[ 0.05187049, 0.04743067, 0.90329666], [ 0.59482538, 0.71014366, 0.86588207], [ 0.51893157, 0.49442107, 0.10697737], [ 0.16068189, 0.60756757, 0.31935279]]) Coordinates: * time (time) datetime64[ns] 2000-01-01 2000-01-02 2000-01-03 2000-01-04 * space (space) |S2 'IA' 'IL' 'IN' space2 (space) |S1 'A' 'B' 'C' ``` Now try to select on the space2 coord: ``` python In [19]: arr.sel(space2='A') ValueError Traceback (most recent call last) <ipython-input-19-eae5e4b64758> in <module>() ----> 1 arr.sel(space2='A') /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/xarray/core/dataarray.pyc in sel(self, method, tolerance, indexers) 601 """ 602 return self.isel(indexing.remap_label_indexers( --> 603 self, indexers, method=method, tolerance=tolerance)) 604 605 def isel_points(self, dim='points', **indexers): /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/xarray/core/dataarray.pyc in isel(self, indexers) 588 DataArray.sel 589 """ --> 590 ds = self._to_temp_dataset().isel(indexers) 591 return self._from_temp_dataset(ds) 592 /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/xarray/core/dataset.pyc in isel(self, **indexers) 908 invalid = [k for k in indexers if k not in self.dims] 909 if invalid: --> 910 raise ValueError("dimensions %r do not exist" % invalid) 911 912 # all indexers should be int, slice or np.ndarrays ValueError: dimensions ['space2'] do not exist ``` Is there an easier 
way to do this? I couldn't think of anything... CC @justinkuosixty |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
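The question in issue 934 (label-selecting on a 1D coordinate that is not a dimension) has a workaround in current xarray: promote the coordinate to the dimension's index with `swap_dims`. A sketch, assuming xarray, pandas, and numpy are installed; variable names are illustrative:

```python
import numpy as np
import pandas as pd
import xarray as xr

arr = xr.DataArray(
    np.random.rand(4, 3),
    [("time", pd.date_range("2000-01-01", periods=4)),
     ("space", ["IA", "IL", "IN"])],
)
arr.coords["space2"] = ("space", ["A", "B", "C"])

# Make space2 the indexing coordinate of the "space" dimension;
# label-based selection then works as usual.
picked = arr.swap_dims({"space": "space2"}).sel(space2="A")
print(picked.dims)  # ('time',)
```

Selecting a scalar label drops the swapped dimension, leaving a 1-D array over `time`.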
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
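The filter behind this page (comments = 6, state closed, user 5635139, newest update first) can be sketched against the schema above with SQLite's in-memory driver. The table here is a reduced, hypothetical subset of the columns, seeded with two rows from this export plus one decoy:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Reduced version of the [issues] schema above (subset of columns).
conn.execute("""
    CREATE TABLE issues (
        id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
        [user] INTEGER, state TEXT, comments INTEGER, updated_at TEXT
    )""")
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        (1975574237, 8409, "Task graphs on .map_blocks ...",
         5635139, "closed", 6, "2024-01-03T04:10:16Z"),
        (866826033, 5215, "Add an Cumulative aggregation ...",
         5635139, "closed", 6, "2023-12-08T22:06:53Z"),
        (123, 1, "unrelated open issue", 42, "open", 0,
         "2020-01-01T00:00:00Z"),
    ],
)

# The filter and ordering stated in the page header; ISO-8601 timestamps
# sort correctly as plain strings.
rows = conn.execute(
    """SELECT number, title FROM issues
       WHERE comments = 6 AND state = 'closed' AND [user] = 5635139
       ORDER BY updated_at DESC"""
).fetchall()
print([r[0] for r in rows])  # [8409, 5215]
```

The decoy row is excluded by the WHERE clause, and the two matching issues come back newest-first, mirroring the ordering of this export.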