issues
10 rows where comments = 1, type = "issue" and user = 35968931 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2019566184 | I_kwDOAMm_X854YCJo | 8494 | Filter expected warnings in the test suite | TomNicholas 35968931 | closed | 0 | 1 | 2023-11-30T21:50:15Z | 2024-04-29T16:57:07Z | 2024-04-29T16:56:16Z | MEMBER | FWIW one thing I'd be keen for us to do generally — though maybe this isn't the place to start it — is handle warnings in the test suite when we add a new warning — i.e. filter them out where we expect them. In this case, that would be loading the netCDF files that have duplicate dims. Otherwise warnings become a huge block of text without much salience. I mostly see the 350 lines of them and think "meh, mostly units & cftime", but then something breaks on a new upstream release that was buried in there, or we have a supported code path that is raising warnings internally. (I'm not sure whether it's possible to enforce that generally — maybe we could raise on any warnings coming from within xarray? Would be a non-trivial project to get us there, though...) Originally posted by @max-sixty in https://github.com/pydata/xarray/issues/8491#issuecomment-1834615826 (A sketch of the filtering pattern follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8494/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
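The pattern @max-sixty describes maps naturally onto pytest's warning tools. A minimal sketch, assuming a hypothetical `load_with_duplicate_dims` stand-in for the netCDF code path; this is not xarray's actual test code:

```python
import warnings

import pytest


def load_with_duplicate_dims():
    # hypothetical stand-in for opening a netCDF file with duplicate dims
    warnings.warn("Duplicate dimension names encountered", UserWarning)


def test_duplicate_dims_warning_is_expected():
    # asserting the expected warning consumes it, so it never lands in the
    # run's warning summary and new warnings stay salient
    with pytest.warns(UserWarning, match="Duplicate dimension"):
        load_with_duplicate_dims()
```

For whole modules, `pytestmark = pytest.mark.filterwarnings("ignore:Duplicate dimension names:UserWarning")` achieves the same without per-test context managers.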
2204768593 | I_kwDOAMm_X86DahlR | 8871 | Concatenation automatically creates indexes where none existed | TomNicholas 35968931 | open | 0 | 1 | 2024-03-25T02:43:31Z | 2024-04-27T16:50:56Z | MEMBER |

What happened?

Currently concatenation will automatically create indexes for any dimension coordinates in the output, even if there were no indexes on the input.

What did you expect to happen?

Indexes not to be created for variables which did not already have them.

Minimal Complete Verifiable Example

```Python
# TODO once passing indexes={} directly to DataArray constructor is allowed
# then no need to create coords object separately first
coords = Coordinates({"x": np.array([1, 2, 3])}, indexes={})

arrays = [
    DataArray(
        np.zeros((3, 3)),
        dims=["x", "y"],
        coords=coords,
    )
    for _ in range(2)
]

combined = concat(arrays, dim="x")
assert combined.shape == (6, 3)
assert combined.dims == ("x", "y")

# should not have auto-created any indexes
assert combined.indexes == {}  # this fails

combined = concat(arrays, dim="z")
assert combined.shape == (2, 3, 3)
assert combined.dims == ("z", "x", "y")

# should not have auto-created any indexes
assert combined.indexes == {}  # this also fails
```

MVCE confirmation

Relevant log output

```Python
nor have auto-created any indexes
```

Anything else we need to know?

The culprit is the call to

I would like to know how to avoid the internal call to

Conceptually, I would have thought we should be examining what indexes exist on the objects to be concatenated, and not creating new indexes for any variable that doesn't already have one. Presumably we should therefore be making use of the

Environment

I've been experimenting with running this test on a branch that includes both #8711 and #8714, but actually this example will fail in the same way on

(A sketch of the indexed vs index-free distinction follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
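For context on the behaviour the issue wants, here is a minimal sketch contrasting xarray's default index auto-creation with explicitly index-free coordinates; the variable names are illustrative:

```python
import numpy as np
import xarray as xr

# by default, a dimension coordinate gets a pandas-backed index automatically
da_indexed = xr.DataArray(np.zeros(3), dims="x", coords={"x": [1, 2, 3]})
print(da_indexed.indexes)  # contains an index for "x"

# Coordinates(..., indexes={}) requests index-free coordinates; the issue
# reports that concat does not preserve this property
coords = xr.Coordinates({"x": np.array([1, 2, 3])}, indexes={})
da_no_index = xr.DataArray(np.zeros(3), dims="x", coords=coords)
print(da_no_index.indexes)  # empty
```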
2259850888 | I_kwDOAMm_X86GspaI | 8966 | HTML repr for chunked variables with high dimensionality | TomNicholas 35968931 | open | 0 | 1 | 2024-04-23T22:00:40Z | 2024-04-24T13:27:05Z | MEMBER |

What is your issue?

The graphical representation of dask arrays with many dimensions can end up off the page in the HTML repr. Ideally dask would worry about this for us, and we just use their

(A minimal reproduction sketch follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
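A minimal sketch reproducing the kind of repr in question; the shape is arbitrary, chosen only to produce many chunked dimensions:

```python
import dask.array as da
import xarray as xr

# a 7-dimensional chunked array whose SVG diagram in the HTML repr
# can extend past the page width
arr = xr.DataArray(da.zeros((4,) * 7, chunks=2))
html = arr._repr_html_()  # embeds dask's (possibly overflowing) SVG
```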
2099530269 | I_kwDOAMm_X859JEod | 8665 | Error when broadcasting array API compliant class | TomNicholas 35968931 | closed | 0 | 1 | 2024-01-25T04:11:14Z | 2024-01-26T16:41:31Z | 2024-01-26T16:41:31Z | MEMBER |

What happened?

Broadcasting fails for array types that strictly follow the array API standard.

What did you expect to happen?

With a normal numpy array this obviously works fine.

Minimal Complete Verifiable Example

```Python
import numpy.array_api as nxp

arr = nxp.asarray([[1, 2, 3], [4, 5, 6]], dtype=np.dtype('float32'))
var = xr.Variable(data=arr, dims=['x', 'y'])

var.isel(x=0)  # this is fine

var * var.isel(x=0)  # this is not
```

```
IndexError                                Traceback (most recent call last)
Cell In[31], line 1
----> 1 var * var.isel(x=0)

File ~/Documents/Work/Code/xarray/xarray/core/_typed_ops.py:487, in VariableOpsMixin.__mul__(self, other)
    486 def __mul__(self, other: VarCompatible) -> Self | T_DataArray:
--> 487     return self._binary_op(other, operator.mul)

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2406, in Variable._binary_op(self, other, f, reflexive)
   2404     other_data, self_data, dims = _broadcast_compat_data(other, self)
   2405 else:
-> 2406     self_data, other_data, dims = _broadcast_compat_data(self, other)
   2407 keep_attrs = _get_keep_attrs(default=False)
   2408 attrs = self._attrs if keep_attrs else None

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2922, in _broadcast_compat_data(self, other)
   2919 def _broadcast_compat_data(self, other):
   2920     if all(hasattr(other, attr) for attr in ["dims", "data", "shape", "encoding"]):
   2921         #

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2899, in _broadcast_compat_variables(*variables)
   2893 """Create broadcast compatible variables, with the same dimensions.
   2894
   2895 Unlike the result of broadcast_variables(), some variables may have
   2896 dimensions of size 1 instead of the size of the broadcast dimension.
   2897 """
   2898 dims = tuple(_unified_dims(variables))
-> 2899 return tuple(var.set_dims(dims) if var.dims != dims else var for var in variables)

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2899, in <genexpr>(.0)
   2893 """Create broadcast compatible variables, with the same dimensions.
   2894
   2895 Unlike the result of broadcast_variables(), some variables may have
   2896 dimensions of size 1 instead of the size of the broadcast dimension.
   2897 """
   2898 dims = tuple(_unified_dims(variables))
-> 2899 return tuple(var.set_dims(dims) if var.dims != dims else var for var in variables)

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:1479, in Variable.set_dims(self, dims, shape)
   1477     expanded_data = duck_array_ops.broadcast_to(self.data, tmp_shape)
   1478 else:
-> 1479     expanded_data = self.data[(None,) * (len(expanded_dims) - self.ndim)]
   1481 expanded_var = Variable(
   1482     expanded_dims, expanded_data, self._attrs, self._encoding, fastpath=True
   1483 )
   1484 return expanded_var.transpose(*dims)

File ~/miniconda3/envs/dev3.11/lib/python3.12/site-packages/numpy/array_api/_array_object.py:555, in Array.__getitem__(self, key)
    550 """
    551 Performs the operation __getitem__.
    552 """
    553 # Note: Only indices required by the spec are allowed. See the
    554 # docstring of _validate_index
--> 555 self._validate_index(key)
    556 if isinstance(key, Array):
    557     # Indexing self._array with array_api arrays can be erroneous
    558     key = key._array

File ~/miniconda3/envs/dev3.11/lib/python3.12/site-packages/numpy/array_api/_array_object.py:348, in Array._validate_index(self, key)
    344 elif n_ellipsis == 0:
    345     # Note boolean masks must be the sole index, which we check for
    346     # later on.
    347     if not key_has_mask and n_single_axes < self.ndim:
--> 348         raise IndexError(
    349             f"{self.ndim=}, but the multi-axes index only specifies "
    350             f"{n_single_axes} dimensions. If this was intentional, "
    351             "add a trailing ellipsis (...) which expands into as many "
    352             "slices (:) as necessary - this is what np.ndarray arrays "
    353             "implicitly do, but such flat indexing behaviour is not "
    354             "specified in the Array API."
    355         )
    357 if n_ellipsis == 0:
    358     indexed_shape = self.shape

IndexError: self.ndim=1, but the multi-axes index only specifies 0 dimensions. If this was intentional, add a trailing ellipsis (...) which expands into as many slices (:) as necessary - this is what np.ndarray arrays implicitly do, but such flat indexing behaviour is not specified in the Array API.
```

MVCE confirmation

Relevant log output

No response

Anything else we need to know?

No response

Environment

main branch of xarray, numpy 1.26.0

(A sketch of the spec-compliant indexing follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
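The error message itself suggests the resolution: under the strict Array API, an index that covers fewer axes than the array has must end with an ellipsis. A sketch using `numpy.array_api` as in the report (it was provisional in NumPy 1.26 and removed in NumPy 2.0; `array-api-strict` is its successor):

```python
import numpy.array_api as nxp  # provisional in NumPy 1.26, removed in 2.0

a = nxp.asarray([1.0, 2.0, 3.0])

# a[(None,)] raises IndexError here: the index covers 0 of the 1 axes,
# exactly as in the traceback above
b = a[(None, ...)]  # a trailing ellipsis makes the expansion spec-compliant
print(b.shape)  # (1, 3)
```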
1801849622 | I_kwDOAMm_X85rZgsW | 7982 | Use Meilisearch in our docs | TomNicholas 35968931 | closed | 0 | 1 | 2023-07-12T22:29:45Z | 2023-07-19T19:49:53Z | 2023-07-19T19:49:53Z | MEMBER |

Is your feature request related to a problem?

Just saw this cool search tool for sphinx, called Meilisearch, in a lightning talk at SciPy. Cc @dcherian

Describe the solution you'd like

Read about it here: https://sphinxdocs.ansys.com/version/stable/user_guide/options.html

Describe alternatives you've considered

No response

Additional context

No response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7982/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1468534020 | I_kwDOAMm_X85XiA0E | 7333 | FacetGrid with coords error | TomNicholas 35968931 | open | 0 | 1 | 2022-11-29T18:42:48Z | 2023-04-03T10:12:40Z | MEMBER |

There may perhaps be a small bug anyway, as DataArrays with and without coords are handled differently. Contrast:

```
da = xr.DataArray(
    data=np.random.randn(2, 2, 2, 10, 10),
    coords={'A': ['a1', 'a2'], 'B': [0, 1], 'C': [0, 1], 'X': range(10), 'Y': range(10)},
)
p = da.sel(A='a1').plot.contour(col='B', row='C')

try:
    p.map_dataarray(xr.plot.pcolormesh, y="B", x="C")
except Exception as e:
    print('An uninformative error:')
    print(e)
```

with:

```
da = xr.DataArray(data=np.random.randn(2, 2, 2, 10, 10))
p = da.sel(dim_0=0).plot.contour(col='dim_1', row='dim_2')

try:
    p.map_dataarray(xr.plot.pcolormesh, y="dim_1", x="dim_2")
except Exception as e:
    print('A more informative error:')
    print(e)
```

```
A more informative error:
x must be one of None, 'dim_3', 'dim_4'
```

Originally posted by @joshdorrington in https://github.com/pydata/xarray/discussions/7310#discussioncomment-4257643 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1230247677 | I_kwDOAMm_X85JVBb9 | 6585 | Add example of apply_ufunc + dask.array.map_blocks to docs? | TomNicholas 35968931 | open | 0 | 1 | 2022-05-09T21:02:43Z | 2022-05-09T21:10:23Z | MEMBER |

What is your issue?

A pattern I use fairly often is `apply_ufunc` + `dask.array.map_blocks`. AFAIK this currently isn't discussed anywhere in the docs. A sensible place to add a recipe explaining this would be just after this section in your notebook, @dcherian?

@rabernat @jbusecke this is the pattern we used in xGCM, FYI. (A sketch of the pattern follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
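A minimal sketch of the pattern the issue refers to: wrap a function that calls `dask.array.map_blocks` itself, and pass `dask="allowed"` so `apply_ufunc` hands it the underlying dask array. The `demean` kernel is an arbitrary example, not taken from the issue:

```python
import dask.array
import numpy as np
import xarray as xr


def demean(block):
    # example numpy kernel, applied independently to each chunk
    return block - block.mean()


def wrapped(arr):
    # arr arrives as a dask array; map the kernel over its blocks
    return dask.array.map_blocks(demean, arr, dtype=arr.dtype)


da = xr.DataArray(np.arange(12.0), dims="x").chunk({"x": 4})
result = xr.apply_ufunc(wrapped, da, dask="allowed")
print(result.compute())
```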
939072049 | MDU6SXNzdWU5MzkwNzIwNDk= | 5587 | Tolerance argument for `da.isin()`? | TomNicholas 35968931 | open | 0 | 1 | 2021-07-07T16:39:42Z | 2021-10-13T06:28:11Z | MEMBER |

Is your feature request related to a problem? Please describe.

Sometimes you want to check that data values are present in another array, but only up to a certain tolerance.

Describe the solution you'd like

Not sure what the implementation should be, but there are two vectorized suggestions here.

Describe alternatives you've considered

Different to

Additional context

@jbusecke requested it. (One possible sketch follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5587/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
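One possible vectorized implementation, as a sketch only; `isin_tolerance` is a hypothetical helper, not an existing xarray API. It broadcasts the data against the test values along a new trailing axis, then reduces with `any`:

```python
import numpy as np
import xarray as xr


def isin_tolerance(da, test_values, tol):
    # hypothetical helper: True where an element lies within tol of any
    # test value; broadcasting adds a trailing axis, any() removes it
    test = np.asarray(test_values).ravel()
    close = np.isclose(da.values[..., np.newaxis], test, atol=tol)
    return da.copy(data=close.any(axis=-1))


da = xr.DataArray([0.02, 0.5, 1.03], dims="x")
print(isin_tolerance(da, [0.0, 1.0], tol=0.05).values)  # [ True False  True]
```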
604218952 | MDU6SXNzdWU2MDQyMTg5NTI= | 3992 | DataArray.integrate has a 'dim' arg, but Dataset.integrate has a 'coord' arg | TomNicholas 35968931 | closed | 0 | 1 | 2020-04-21T19:12:03Z | 2021-01-29T22:59:30Z | 2021-01-29T22:59:30Z | MEMBER | This is just a minor gripe, but I think it should be fixed. The API syntax is inconsistent: `DataArray.integrate` names its argument `dim`, while `Dataset.integrate` names it `coord`.

The discussion on the original PR seems to agree, so I think this was just a small oversight. The only question is whether it requires a deprecation cycle? (A sketch of the inconsistency follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
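A sketch of the inconsistency as it stood when the issue was filed, with keyword names taken from the issue title (both methods later standardized on `coord`):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(4.0), dims="x", coords={"x": np.linspace(0.0, 1.0, 4)})
ds = da.to_dataset(name="v")

# the same operation spelled its keyword two different ways at the time:
da.integrate(dim="x")    # DataArray.integrate took `dim`
ds.integrate(coord="x")  # Dataset.integrate took `coord`
```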
409854736 | MDU6SXNzdWU0MDk4NTQ3MzY= | 2768 | [Bug] Reduce fails when no axis given | TomNicholas 35968931 | closed | 0 | 1 | 2019-02-13T15:16:45Z | 2019-02-19T06:13:00Z | 2019-02-19T06:12:59Z | MEMBER |

```python
import numpy as np
from xarray import DataArray

da = DataArray(np.array([[1, 3, 3], [2, 1, 5]]))

def total_sum(data):
    return np.sum(data.flatten())

sum = da.reduce(total_sum)
print(sum)
```

This should print a DataArray with just the number 15 in it, but instead it throws the error

This contradicts what the docstring of

The problem is that in

(A workaround sketch follows this row.) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
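The truncated report above points at `axis` handling inside `reduce`; assuming the failure is the aggregation function receiving an unexpected `axis` keyword, a workaround sketch is simply to accept and ignore it:

```python
import numpy as np
from xarray import DataArray

da = DataArray(np.array([[1, 3, 3], [2, 1, 5]]))


def total_sum(data, axis=None):
    # accepting (and ignoring) an axis keyword sidesteps the reported
    # failure; we reduce over the flattened array regardless
    return np.sum(data)


print(da.reduce(total_sum))  # prints a 0-d DataArray containing 15
```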
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);