issues
12 rows where comments = 4, type = "issue" and user = 2448579 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2259316341 | I_kwDOAMm_X86Gqm51 | 8965 | Support concurrent loading of variables | dcherian 2448579 | open | 0 | 4 | 2024-04-23T16:41:24Z | 2024-04-29T22:21:51Z | MEMBER | Is your feature request related to a problem? Today, if users want to concurrently load multiple variables in a DataArray or Dataset, they have to use dask. It struck me that it'd be pretty easy for |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8965/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
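The request above can be sketched without dask. Below is a hedged, stdlib-only illustration of loading several variables concurrently with a thread pool; the `load_all` helper and its zero-argument "loaders" are hypothetical stand-ins for xarray's lazy backend arrays, not real xarray API:

```python
from concurrent.futures import ThreadPoolExecutor

def load_all(loaders, max_workers=4):
    """Run several zero-argument load functions concurrently and
    return their results keyed by variable name."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {name: pool.submit(fn) for name, fn in loaders.items()}
        return {name: fut.result() for name, fut in futures.items()}

# Toy usage: each "loader" stands in for reading one variable from disk.
data = load_all({"temp": lambda: [1.0, 2.0], "salt": lambda: [3.0, 4.0]})
```

For I/O-bound backends (netCDF, zarr over object storage), threads alone can recover most of the concurrency benefit without pulling in a full dask graph.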
2027147099 | I_kwDOAMm_X854089b | 8523 | tree-reduce the combine for `open_mfdataset(..., parallel=True, combine="nested")` | dcherian 2448579 | open | 0 | 4 | 2023-12-05T21:24:51Z | 2023-12-18T19:32:39Z | MEMBER | Is your feature request related to a problem? When Instead we can tree-reduce the combine (example) by switching to
cc @TomNicholas |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
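The tree-reduction idea can be illustrated with plain Python. This is a hypothetical `tree_combine` helper, not xarray's implementation; `combine` stands in for `combine_nested`. Pairwise merging in rounds keeps the reduction depth at O(log n) instead of the O(n) of a left-to-right fold:

```python
def tree_combine(items, combine):
    """Reduce a list pairwise in rounds; `combine` must be associative."""
    items = list(items)
    while len(items) > 1:
        items = [
            combine(items[i], items[i + 1]) if i + 1 < len(items) else items[i]
            for i in range(0, len(items), 2)
        ]
    return items[0]

# Toy usage: list concatenation stands in for combining datasets.
combined = tree_combine([[1], [2], [3], [4], [5]], lambda a, b: a + b)
```

With dask, the shallower graph also exposes more parallelism: combines at the same level of the tree can run concurrently.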
1603957501 | I_kwDOAMm_X85fmnL9 | 7573 | Add optional min versions to conda-forge recipe (`run_constrained`) | dcherian 2448579 | closed | 0 | 4 | 2023-02-28T23:12:15Z | 2023-08-21T16:12:34Z | 2023-08-21T16:12:21Z | MEMBER | Is your feature request related to a problem? I opened this PR to add minimum versions for our optional dependencies: https://github.com/conda-forge/xarray-feedstock/pull/84/files to prevent issues like #7467. I think we'd need a policy to choose which ones to list. Here's the current list:
Some examples to think about:
Describe the solution you'd like: No response. Describe alternatives you've considered: No response. Additional context: No response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7573/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1789989152 | I_kwDOAMm_X85qsREg | 7962 | Better chunk manager error | dcherian 2448579 | closed | 0 | 4 | 2023-07-05T17:27:25Z | 2023-07-24T22:26:14Z | 2023-07-24T22:26:13Z | MEMBER | What happened? I just ran into this error in an environment without dask.
I think we could easily recommend that the user install a package that provides |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
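The suggested improvement amounts to a friendlier lookup failure. A minimal sketch with hypothetical names (xarray's real registry lives in its chunk-manager entrypoint machinery): look up the manager and, on failure, name the missing package and what is actually registered.

```python
def get_chunk_manager(name, registry):
    """Return the chunk manager registered under `name`, or raise an
    error that says what to install and which managers exist."""
    try:
        return registry[name]
    except KeyError:
        raise ImportError(
            f"chunk manager {name!r} is not available. "
            f"Please make sure a package providing it is installed "
            f"(e.g. install 'dask' for the 'dask' manager). "
            f"Registered managers: {sorted(registry)}"
        ) from None

# Toy usage: an empty registry, as in an environment without dask.
try:
    get_chunk_manager("dask", {})
except ImportError as exc:
    message = str(exc)
```

Raising `ImportError` (rather than `KeyError`) matches the user's mental model: the fix is installing a package, not changing a key.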
1760733017 | I_kwDOAMm_X85o8qdZ | 7924 | Migrate from nbsphinx to myst, myst-nb | dcherian 2448579 | open | 0 | 4 | 2023-06-16T14:17:41Z | 2023-06-20T22:07:42Z | MEMBER | Is your feature request related to a problem? I think we should switch to MyST markdown for our docs. I've been using MyST markdown and MyST-NB in docs in other projects and it works quite well. Advantages: 1. We get HTML reprs in the docs (example), which is a big improvement. (#6620) 2. I think many find markdown a lot easier to write than RST. There's a tool to migrate RST to MyST (RTD's migration guide). Describe the solution you'd like: No response. Describe alternatives you've considered: No response. Additional context: No response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7924/reactions", "total_count": 5, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1119738354 | I_kwDOAMm_X85Cvdny | 6222 | test packaging & distribution | dcherian 2448579 | closed | 0 | 4 | 2022-01-31T17:42:40Z | 2022-02-03T15:45:17Z | 2022-02-03T15:45:17Z | MEMBER | Is your feature request related to a problem? It seems like we should have a test to make sure our dependencies are specified correctly. Describe the solution you'd like: For instance, we could add a step to the release workflow: https://github.com/pydata/xarray/blob/b09de8195a9e22dd35d1b7ed608ea15dad0806ef/.github/workflows/pypi-release.yaml#L34-L43 after Alternatively, we could have another test config in our regular CI to build + import. Thoughts? Is this excessive for a somewhat rare problem? Describe alternatives you've considered: No response. Additional context: No response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6222/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1072473598 | I_kwDOAMm_X84_7KX- | 6051 | Check for just ... in stack etc, and raise with a useful error message | dcherian 2448579 | closed | 0 | 4 | 2021-12-06T18:35:27Z | 2022-01-03T23:05:23Z | 2022-01-03T23:05:23Z | MEMBER | Is your feature request related to a problem? Please describe. The following doesn't work ``` python import xarray as xr da = xr.DataArray([[1,2],[1,2]], dims=("x", "y")) da.stack(flat=...) ``` Describe the solution you'd like
This could be equivalent to
I think using |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
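The proposed `...` handling is just a dimension-expansion rule. A small, xarray-free sketch (the `expand_ellipsis` helper is hypothetical) of what `da.stack(flat=...)` could expand to:

```python
def expand_ellipsis(requested, all_dims):
    """Expand `...` to 'all (remaining) dimensions', so that
    stack(flat=...) behaves like stack(flat=all_dims)."""
    if requested is ...:
        return tuple(all_dims)
    expanded = []
    for d in requested:
        if d is ...:
            # `...` inside a sequence means "every dim not named explicitly"
            expanded.extend(x for x in all_dims if x not in requested)
        else:
            expanded.append(d)
    return tuple(expanded)

dims = expand_ellipsis(..., ("x", "y"))
```

This mirrors the rule `transpose` already uses for `...`, which is the consistency argument behind the issue.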
502149236 | MDU6SXNzdWU1MDIxNDkyMzY= | 3371 | Add xr.unify_chunks top level method | dcherian 2448579 | closed | 0 | 4 | 2019-10-03T15:49:09Z | 2021-06-16T14:56:59Z | 2021-06-16T14:56:58Z | MEMBER | This should handle multiple DataArrays and Datasets. Implemented in #3276 as |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
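The core idea behind chunk unification can be sketched for a single dimension. This toy `unify_dim_chunks` helper is not the real dask/xarray implementation; it shows the principle of taking the union of block boundaries so the result is compatible with every input chunking:

```python
from itertools import accumulate

def unify_dim_chunks(*chunkings):
    """Merge several chunkings of one dimension by unioning their
    cumulative block boundaries, then converting back to chunk sizes."""
    boundaries = set()
    for chunks in chunkings:
        boundaries.update(accumulate(chunks))
    offsets = [0] + sorted(boundaries)
    return tuple(b - a for a, b in zip(offsets, offsets[1:]))

# (4, 4) and (2, 6) both cover a length-8 dimension; the unified
# chunking is the finer one compatible with both.
unified = unify_dim_chunks((4, 4), (2, 6))
```

A top-level function applying this per shared dimension across multiple DataArrays and Datasets is what the issue asks for.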
636666706 | MDU6SXNzdWU2MzY2NjY3MDY= | 4146 | sparse upstream-dev test failures | dcherian 2448579 | closed | 0 | 4 | 2020-06-11T02:20:11Z | 2021-03-17T23:10:45Z | 2020-06-16T16:00:10Z | MEMBER | Here are three of the errors:
```
____ testdask_token ______
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4146/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
636665269 | MDU6SXNzdWU2MzY2NjUyNjk= | 4145 | Fix matplotlib in upstream-dev test config | dcherian 2448579 | closed | 0 | 4 | 2020-06-11T02:15:52Z | 2020-06-12T09:11:31Z | 2020-06-12T09:11:31Z | MEMBER | From @keewis comment in #4138
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
398152613 | MDU6SXNzdWUzOTgxNTI2MTM= | 2667 | datetime interpolation doesn't work | dcherian 2448579 | closed | 0 | 4 | 2019-01-11T06:45:55Z | 2019-02-11T09:47:09Z | 2019-02-11T09:47:09Z | MEMBER | Code Sample, a copy-pastable example if possible: This code doesn't work anymore on master.
Problem description: The above code now raises the error

```
AttributeError                            Traceback (most recent call last)
<ipython-input-26-dda3a6d5725b> in <module>
      6     dims=['time'],
      7     coords={'time': pd.date_range('01-01-2001', periods=50, freq='H')})
----> 8 a.interp(x=xi, time=xi.time)

~/work/python/xarray/xarray/core/dataarray.py in interp(self, coords, method, assume_sorted, kwargs, coords_kwargs)
   1032         ds = self._to_temp_dataset().interp(
   1033             coords, method=method, kwargs=kwargs, assume_sorted=assume_sorted,
-> 1034             coords_kwargs)
   1035         return self._from_temp_dataset(ds)
   1036

~/work/python/xarray/xarray/core/dataset.py in interp(self, coords, method, assume_sorted, kwargs, coords_kwargs)
   2008                 in indexers.items() if k in var.dims}
   2009             variables[name] = missing.interp(
-> 2010                 var, var_indexers, method, kwargs)
   2011         elif all(d not in indexers for d in var.dims):
   2012             # keep unrelated object array

~/work/python/xarray/xarray/core/missing.py in interp(var, indexes_coords, method, *kwargs)
    468     new_dims = broadcast_dims + list(destination[0].dims)
    469     interped = interp_func(var.transpose(original_dims).data,
--> 470                            x, destination, method, kwargs)
    471
    472     result = Variable(new_dims, interped, attrs=var.attrs)

~/work/python/xarray/xarray/core/missing.py in interp_func(var, x, new_x, method, kwargs)
    535                          new_axis=new_axis, drop_axis=drop_axis)
    536
--> 537     return _interpnd(var, x, new_x, func, kwargs)
    538
    539

~/work/python/xarray/xarray/core/missing.py in _interpnd(var, x, new_x, func, kwargs)
    558     var = var.transpose(range(-len(x), var.ndim - len(x)))
    559     # stack new_x to 1 vector, with reshape
--> 560     xi = np.stack([x1.values.ravel() for x1 in new_x], axis=-1)
    561     rslt = func(x, var, xi, **kwargs)
    562     # move back the interpolation axes to the last position

~/work/python/xarray/xarray/core/missing.py in <listcomp>(.0)
    558     var = var.transpose(range(-len(x), var.ndim - len(x)))
    559     # stack new_x to 1 vector, with reshape
--> 560     xi = np.stack([x1.values.ravel() for x1 in new_x], axis=-1)

AttributeError: 'numpy.ndarray' object has no attribute 'values'
```

I think the issue is this line which returns a numpy array instead of a Variable. This was added in the |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
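The underlying requirement is that interpolation routines only accept numeric coordinates, so datetimes must be converted to numbers first. A stdlib-only sketch of that conversion (hypothetical helper, simplified from what xarray does internally with datetime64 values):

```python
from datetime import datetime

def datetimes_to_numeric(times, offset=None):
    """Convert datetimes to float seconds since `offset` (default: the
    earliest time), yielding values a numeric interpolator can consume."""
    if offset is None:
        offset = min(times)
    return [(t - offset).total_seconds() for t in times]

seconds = datetimes_to_numeric(
    [datetime(2001, 1, 1, 0), datetime(2001, 1, 1, 1), datetime(2001, 1, 1, 2)]
)
```

Subtracting a common offset before converting also avoids precision loss from representing absolute timestamps as large floats.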
373955021 | MDU6SXNzdWUzNzM5NTUwMjE= | 2510 | Dataset-wide _FillValue | dcherian 2448579 | closed | 0 | 4 | 2018-10-25T13:44:46Z | 2018-10-25T17:39:35Z | 2018-10-25T17:37:26Z | MEMBER | I'm looking at a netCDF file that has the variable
and global attributes

```
// global attributes:
        :platform_code = "8n90e" ;
        :site_code = "8n90e" ;
        :wmo_platform_code = 23007 ;
        :array = "RAMA" ;
        :Request_for_acknowledgement = "If you use these data in publications or presentations, please acknowledge the GTMBA Project Office of NOAA/PMEL. Also, we would appreciate receiving a preprint and/or reprint of publications utilizing the data for inclusion in our bibliography. Relevant publications should be sent to: GTMBA Project Office, NOAA/Pacific Marine Environmental Laboratory, 7600 Sand Point Way NE, Seattle, WA 98115" ;
        :Data_Source = "Global Tropical Moored Buoy Array Project Office/NOAA/PMEL" ;
        :File_info = "Contact: Dai.C.McClurg@noaa.gov" ;
        :missing_value = 1.e+35f ;
        :_FillValue = 1.e+35f ;
        :CREATION_DATE = "13:05 28-JUL-2017" ;
        :_Format = "classic" ;
```

Problem description: In this case the I'm not sure that this is standards-compliant but is this something we could support? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
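The requested behavior is a per-variable fallback to the file-level attribute. A minimal sketch with a hypothetical helper (this is the issue's proposal, not xarray's default decoding behavior):

```python
def effective_fill_value(var_attrs, global_attrs):
    """Prefer a variable's own _FillValue, then fall back to the
    dataset-wide (global) one; return None if neither is set."""
    if "_FillValue" in var_attrs:
        return var_attrs["_FillValue"]
    return global_attrs.get("_FillValue")

# Toy usage: the variable defines no _FillValue, but the file does.
fv = effective_fill_value({}, {"_FillValue": 1.0e35, "missing_value": 1.0e35})
```

A decoder could then mask values equal to `fv` exactly as it would for a variable-level `_FillValue`.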
```
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
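The filter shown at the top of the page corresponds to a simple query against this schema. A self-contained sqlite3 sketch (columns abridged, rows made up) of the query Datasette runs for this view:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Abridged version of the issues table, keeping only the filtered columns.
conn.execute(
    "CREATE TABLE issues ([id] INTEGER PRIMARY KEY, [comments] INTEGER, "
    "[user] INTEGER, [type] TEXT, [updated_at] TEXT)"
)
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?)",
    [
        (1, 4, 2448579, "issue", "2024-04-29T22:21:51Z"),
        (2, 4, 2448579, "issue", "2023-12-18T19:32:39Z"),
        (3, 2, 2448579, "issue", "2023-01-01T00:00:00Z"),  # wrong comment count
    ],
)
# The page's filter: comments = 4, type = "issue", user = 2448579,
# sorted by updated_at descending.
rows = conn.execute(
    "SELECT id FROM issues "
    "WHERE comments = 4 AND [type] = 'issue' AND [user] = 2448579 "
    "ORDER BY updated_at DESC"
).fetchall()
```

ISO-8601 timestamps stored as TEXT sort correctly with plain string comparison, which is why `ORDER BY updated_at DESC` works here.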