issues
9 rows where comments = 2, repo = 13221727 and user = 500246 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
741806260 | MDU6SXNzdWU3NDE4MDYyNjA= | 4579 | Invisible differences between arrays using IntervalIndex | gerritholl 500246 | open | 0 | 2 | 2020-11-12T17:54:55Z | 2022-10-03T15:09:25Z | CONTRIBUTOR | What happened: I have two DataArray objects that look identical but do not behave identically. What you expected to happen: I expect two arrays that appear identical to behave identically. If they don't behave identically then there should be some way to tell the difference (apart from the differing behaviour itself). Minimal Complete Verifiable Example:

```python
import xarray
import pandas
da1 = xarray.DataArray([0, 1, 2], dims=("x",), coords={"x": pandas.interval_range(0, 2, 3)})
da2 = xarray.DataArray([0, 1, 2], dims=("x",), coords={"x": pandas.interval_range(0, 2, 3).to_numpy()})
print(repr(da1) == repr(da2))
print(repr(da1.x) == repr(da2.x))
print(da1.x.dtype == da2.x.dtype)
# identical? No:
print(da1.equals(da2))
print(da1.x.equals(da2.x))
# in particular:
da1.sel(x=1)  # works
da2.sel(x=1)  # fails
```

Results in:

```
True
True
True
False
False
Traceback (most recent call last):
  File "/data/gholl/miniconda3/envs/py38/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2895, in get_loc
    return self._engine.get_loc(casted_key)
  File "pandas/_libs/index.pyx", line 70, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 101, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 1675, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 1683, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 1

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "mwe105.py", line 19, in <module>
    da2.sel(x=1)  # fails
  File "/data/gholl/miniconda3/envs/py38/lib/python3.8/site-packages/xarray/core/dataarray.py", line 1143, in sel
    ds = self._to_temp_dataset().sel(
  File "/data/gholl/miniconda3/envs/py38/lib/python3.8/site-packages/xarray/core/dataset.py", line 2105, in sel
    pos_indexers, new_indexes = remap_label_indexers(
  File "/data/gholl/miniconda3/envs/py38/lib/python3.8/site-packages/xarray/core/coordinates.py", line 397, in remap_label_indexers
    pos_indexers, new_indexes = indexing.remap_label_indexers(
  File "/data/gholl/miniconda3/envs/py38/lib/python3.8/site-packages/xarray/core/indexing.py", line 275, in remap_label_indexers
    idxr, new_idx = convert_label_indexer(index, label, dim, method, tolerance)
  File "/data/gholl/miniconda3/envs/py38/lib/python3.8/site-packages/xarray/core/indexing.py", line 196, in convert_label_indexer
    indexer = index.get_loc(label_value, method=method, tolerance=tolerance)
  File "/data/gholl/miniconda3/envs/py38/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2897, in get_loc
    raise KeyError(key) from err
KeyError: 1
```

Additional context: I suppose this happens because under the hood xarray does something clever to support pandas-style indexing even though the coordinate variable appears like a numpy array with an object dtype, and that this cleverness is lost if the object is already converted to a numpy array. But there is, as far as I can see, no way to tell the difference once the objects have been created.
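One hedged way to surface the hidden difference described above is to inspect the backing pandas index via `.indexes`, which the repr does not show. This is a sketch, assuming a reasonably recent xarray/pandas; the index type inferred for the object-dtype coordinate may vary by version, so only the IntervalIndex-backed array is asserted on.

```python
import pandas
import xarray

da1 = xarray.DataArray(
    [0, 1, 2], dims=("x",),
    coords={"x": pandas.interval_range(0, 2, 3)})
da2 = xarray.DataArray(
    [0, 1, 2], dims=("x",),
    coords={"x": pandas.interval_range(0, 2, 3).to_numpy()})

# The reprs match, but the backing pandas indexes need not:
print(type(da1.indexes["x"]))
print(type(da2.indexes["x"]))

# da1 is backed by an IntervalIndex, so label-based selection works:
assert isinstance(da1.indexes["x"], pandas.IntervalIndex)
assert int(da1.sel(x=1)) == 1
```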
Environment: Output of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: None python: 3.8.6 | packaged by conda-forge | (default, Oct 7 2020, 19:08:05) [GCC 7.5.0] python-bits: 64 OS: Linux OS-release: 4.12.14-lp150.12.82-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: en_GB.UTF-8 libhdf5: 1.10.6 libnetcdf: 4.7.4 xarray: 0.16.1 pandas: 1.1.4 numpy: 1.19.4 scipy: 1.5.3 netCDF4: 1.5.4 pydap: None h5netcdf: 0.8.1 h5py: 3.1.0 Nio: None zarr: 2.5.0 cftime: 1.2.1 nc_time_axis: None PseudoNetCDF: None rasterio: 1.1.7 cfgrib: None iris: None bottleneck: None dask: 2.30.0 distributed: 2.30.1 matplotlib: 3.3.2 cartopy: 0.18.0 seaborn: None numbagg: None pint: None setuptools: 49.6.0.post20201009 pip: 20.2.4 conda: installed pytest: 6.1.2 IPython: 7.19.0 sphinx: 3.3.0 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
686461572 | MDU6SXNzdWU2ODY0NjE1NzI= | 4378 | Plotting when Interval coordinate is timedelta-based | gerritholl 500246 | open | 0 | 2 | 2020-08-26T16:36:27Z | 2022-04-18T21:55:15Z | CONTRIBUTOR | Is your feature request related to a problem? Please describe. The xarray plotting interface supports coordinates containing pandas.Interval objects, but not when the intervals are timedelta-based:

```python
import numpy as np
import pandas as pd
import xarray as xr

da = xr.DataArray(
    np.arange(10),
    dims=("x",),
    coords={"x": [pd.Interval(i, i+1) for i in range(10)]})
da.plot()  # works

da = xr.DataArray(
    np.arange(10),
    dims=("x",),
    coords={"x": [pd.Interval(
        d - pd.Timestamp("2000-01-01"),
        d - pd.Timestamp("2000-01-01") + pd.Timedelta("1H"))
        for d in pd.date_range("2000-01-01", "2000-01-02", 10)]})
da.plot()  # fails
```

The latter fails with:
This error message is somewhat confusing, because the coordinates are "dates of type (...) pd.Interval", but perhaps a timedelta is not considered a date. Describe the solution you'd like: I would like to be able to use the xarray plotting interface for any pandas.Interval coordinate, including timedelta-based ones. Describe alternatives you've considered: I'll "manually" calculate the midpoints and use those as a timedelta coordinate instead. Additional context: It seems that regular timedeltas aren't really supported either; they don't cause an error message, but rather produce incorrect results. There's probably a related issue somewhere, but I can't find it now. |
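The "alternatives" paragraph above (manually computing midpoints) can be sketched as follows. The interval coordinate constructed here is a hypothetical stand-in for the reporter's data, not their original example:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical timedelta-based Interval coordinate, as in the report:
da = xr.DataArray(
    np.arange(10), dims=("x",),
    coords={"x": [pd.Interval(pd.Timedelta(hours=i), pd.Timedelta(hours=i + 1))
                  for i in range(10)]})

# Workaround sketch: replace each interval with its midpoint, giving a
# plain timedelta coordinate that plotting can at least attempt to handle.
mid = [iv.left + (iv.right - iv.left) / 2 for iv in da.x.values]
da_mid = da.assign_coords(x=mid)
print(da_mid.x.values[:2])
```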
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4378/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
618985094 | MDU6SXNzdWU2MTg5ODUwOTQ= | 4065 | keep_attrs not respected for unary operators | gerritholl 500246 | closed | 0 | 2 | 2020-05-15T13:55:14Z | 2020-10-14T16:29:51Z | 2020-10-14T16:29:51Z | CONTRIBUTOR | The xarray global option keep_attrs is not respected for unary operators. MCVE Code Sample
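The MCVE itself is truncated in this export; a minimal reconstruction of the reported pattern (an assumption, not the reporter's exact code) would be:

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], attrs={"units": "K"})
with xr.set_options(keep_attrs=True):
    # At the time of the report, unary operators dropped attrs even with
    # keep_attrs=True; since the fix (closed 2020-10-14) they are retained.
    print((-da).attrs)
```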
Expected Output: I expect the attributes to be kept when applying a unary operator.
Problem Description: I get empty attributes instead.
I get the same for the other unary operators. Versions: Tested with latest xarray master (see below for details). Output of <tt>xr.show_versions()</tt> INSTALLED VERSIONS ------------------ commit: None python: 3.8.2 | packaged by conda-forge | (default, Mar 23 2020, 18:16:37) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 4.12.14-lp150.12.82-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: en_GB.UTF-8 libhdf5: 1.10.5 libnetcdf: 4.7.4 xarray: 0.15.2.dev64+g2542a63f pandas: 1.0.3 numpy: 1.18.1 scipy: 1.4.1 netCDF4: 1.5.3 pydap: None h5netcdf: None h5py: 2.10.0 Nio: None zarr: 2.4.0 cftime: 1.1.1.2 nc_time_axis: None PseudoNetCDF: None rasterio: 1.1.3 cfgrib: None iris: None bottleneck: None dask: 2.14.0 distributed: 2.14.0 matplotlib: 3.2.1 cartopy: 0.17.0 seaborn: None numbagg: None pint: None setuptools: 46.1.3.post20200325 pip: 20.0.2 conda: installed pytest: 5.4.1 IPython: 7.13.0 sphinx: None |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4065/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
528154893 | MDU6SXNzdWU1MjgxNTQ4OTM= | 3572 | Context manager `AttributeError` when engine='h5netcdf' | gerritholl 500246 | closed | 0 | 2 | 2019-11-25T15:19:29Z | 2019-11-25T16:12:37Z | 2019-11-25T16:12:37Z | CONTRIBUTOR | Opening this NetCDF file works fine with the default engine, but fails with an AttributeError with the h5netcdf engine. MCVE Code Sample. Data available from EUMETSAT: https://www.eumetsat.int/website/home/Satellites/FutureSatellites/MeteosatThirdGeneration/MTGData/MTGUserTestData/index.html --> ftp://ftp.eumetsat.int/pub/OPS/out/test-data/Test-data-for-External-Users/MTG_FCI_Test-Data/ --> uncompressed

```python
import xarray
f = "/path/to/.../W_XX-EUMETSAT-Darmstadt,IMG+SAT,MTI1+FCI-1C-RRAD-FDHSI-FD--CHK-BODY--L2P-NC4E_C_EUMT_20170410114434_GTT_DEV_20170410113925_20170410113934_N__C_0070_0067.nc"
ds = xarray.open_dataset(f, engine="h5netcdf")
```

Expected Output: No output at all. Problem Description: Results in
Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
246093122 | MDU6SXNzdWUyNDYwOTMxMjI= | 1494 | AssertionError when storing datetime coordinates of wrong units | gerritholl 500246 | closed | 0 | 2 | 2017-07-27T16:11:48Z | 2019-06-30T04:28:18Z | 2019-06-30T04:28:17Z | CONTRIBUTOR | The following code should probably fail somewhere else than with an AssertionError:

```
$ cat mwe.py
#!/usr/bin/env python3.6

import numpy
import xarray
x = xarray.DataArray(
    [1, 2, 3],
    dims=["X"],
    coords={"X": numpy.zeros(shape=3, dtype="M8[ms]")})
x.to_netcdf("/tmp/test.nc")

$ python3.6 mwe.py
Traceback (most recent call last):
  File "mwe.py", line 11, in <module>
    x.to_netcdf("/tmp/test.nc")
  File "/dev/shm/gerrit/venv/stable-3.6/lib/python3.6/site-packages/xarray/core/dataarray.py", line 1351, in to_netcdf
    dataset.to_netcdf(*args, **kwargs)
  File "/dev/shm/gerrit/venv/stable-3.6/lib/python3.6/site-packages/xarray/core/dataset.py", line 977, in to_netcdf
    unlimited_dims=unlimited_dims)
  File "/dev/shm/gerrit/venv/stable-3.6/lib/python3.6/site-packages/xarray/backends/api.py", line 573, in to_netcdf
    unlimited_dims=unlimited_dims)
  File "/dev/shm/gerrit/venv/stable-3.6/lib/python3.6/site-packages/xarray/core/dataset.py", line 916, in dump_to_store
    unlimited_dims=unlimited_dims)
  File "/dev/shm/gerrit/venv/stable-3.6/lib/python3.6/site-packages/xarray/backends/common.py", line 244, in store
    cf_variables, cf_attrs = cf_encoder(variables, attributes)
  File "/dev/shm/gerrit/venv/stable-3.6/lib/python3.6/site-packages/xarray/conventions.py", line 1089, in cf_encoder
    for k, v in iteritems(variables))
  File "/dev/shm/gerrit/venv/stable-3.6/lib/python3.6/site-packages/xarray/conventions.py", line 1089, in <genexpr>
    for k, v in iteritems(variables))
  File "/dev/shm/gerrit/venv/stable-3.6/lib/python3.6/site-packages/xarray/conventions.py", line 734, in encode_cf_variable
    var = maybe_encode_datetime(var)
  File "/dev/shm/gerrit/venv/stable-3.6/lib/python3.6/site-packages/xarray/conventions.py", line 585, in maybe_encode_datetime
    data, encoding.pop('units', None), encoding.pop('calendar', None))
  File "/dev/shm/gerrit/venv/stable-3.6/lib/python3.6/site-packages/xarray/conventions.py", line 293, in encode_cf_datetime
    assert dates.dtype == 'datetime64[ns]'
AssertionError
```
 |
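A sketch of a user-side workaround (an assumption for illustration, not the fix that eventually landed): cast the coordinate to the nanosecond precision that the CF encoder asserts on before writing.

```python
import numpy as np
import xarray as xr

x = xr.DataArray(
    [1, 2, 3], dims=["X"],
    coords={"X": np.zeros(shape=3, dtype="M8[ms]")})

# Cast the millisecond-precision coordinate to datetime64[ns], which the
# CF encoder expected at the time of the report.
x = x.assign_coords(X=x.X.values.astype("M8[ns]"))
print(x.X.dtype)
```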
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1494/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
381633612 | MDExOlB1bGxSZXF1ZXN0MjMxNTU2ODM5 | 2557 | add missing comma and article in error message | gerritholl 500246 | closed | 0 | 2 | 2018-11-16T14:59:02Z | 2018-11-16T16:40:03Z | 2018-11-16T16:40:03Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/2557 | Add missing comma and article in error message when attribute values have the wrong type. I think this change is sufficiently minor that no documentation or whatsnew changes should be necessary. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2557/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
289790965 | MDU6SXNzdWUyODk3OTA5NjU= | 1838 | DataArray.sum does not respect dtype keyword | gerritholl 500246 | closed | 0 | 2 | 2018-01-18T22:01:07Z | 2018-01-20T18:29:02Z | 2018-01-20T18:29:02Z | CONTRIBUTOR | Code Sample, a copy-pastable example if possible:

```python
# Your code here
import xarray
from numpy import arange

da = xarray.DataArray(arange(5, dtype="i2"))
print(da.sum(dtype="i4").dtype)
```

Problem Description: The result is int64. This is a problem because I asked for int32. Expected Output: int32. Output of
|
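For comparison, numpy itself honours the requested accumulator dtype, which is the behaviour the report expects (and which xarray adopted once this issue was fixed):

```python
import numpy as np
import xarray as xr

a = np.arange(5, dtype="i2")
# NumPy honours the requested accumulator dtype:
assert a.sum(dtype="i4").dtype == np.dtype("int32")

da = xr.DataArray(a)
# Since the fix, xarray forwards dtype to the underlying reduction:
print(da.sum(dtype="i4").dtype)
```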
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
212501628 | MDU6SXNzdWUyMTI1MDE2Mjg= | 1300 | git version label yields version in violation of PEP 440 | gerritholl 500246 | closed | 0 | 2 | 2017-03-07T17:23:00Z | 2017-12-15T07:26:24Z | 2017-12-15T07:26:24Z | CONTRIBUTOR | When an unreleased copy is installed from a git checkout, the version label is taken verbatim from the git describe output. This violates PEP 440, which leads to multiple problems:
Instead, the version number above should be written as |
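The compliance question can be checked mechanically with the `packaging` library. The version strings below are hypothetical illustrations of a git-describe-style label and a PEP 440 local-version equivalent, since the report's exact strings are truncated in this export:

```python
from packaging.version import Version, InvalidVersion

def is_pep440(version: str) -> bool:
    """Return True if `version` parses as a valid PEP 440 version."""
    try:
        Version(version)
    except InvalidVersion:
        return False
    return True

# Hypothetical examples:
assert not is_pep440("0.9.1-28-g9d0a0b5")  # raw git-describe style label
assert is_pep440("0.9.1+28.g9d0a0b5")      # PEP 440 local-version form
```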
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
228023777 | MDU6SXNzdWUyMjgwMjM3Nzc= | 1405 | Using uint64 for Dataset indexing gives ValueError | gerritholl 500246 | closed | 0 | 2 | 2017-05-11T15:05:20Z | 2017-10-23T07:50:29Z | 2017-10-23T07:50:29Z | CONTRIBUTOR | Trying to index a Dataset with a uint64 array raises a ValueError, whereas the same index as int64 works fine:

```
In [13]: import numpy, xarray; from numpy import arange

In [14]: ds = xarray.Dataset({"A": (("x", "y"), arange(5*6).reshape(5,6))})

In [15]: ds[{"x": numpy.array([0], dtype="int64")}]
Out[15]:
<xarray.Dataset>
Dimensions:  (x: 1, y: 6)
Dimensions without coordinates: x, y
Data variables:
    A        (x, y) int64 0 1 2 3 4 5

In [16]: ds[{"x": numpy.array([0], dtype="uint64")}]
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-16-4cf23af0967e> in <module>()
----> 1 ds[{"x": numpy.array([0], dtype="uint64")}]

/dev/shm/gerrit/venv/stable-3.5/lib/python3.5/site-packages/xarray/core/dataset.py in __getitem__(self, key)
    722         """
    723         if utils.is_dict_like(key):
--> 724             return self.isel(**key)
    725
    726         if hashable(key):

/dev/shm/gerrit/venv/stable-3.5/lib/python3.5/site-packages/xarray/core/dataset.py in isel(self, drop, **indexers)
   1147         for name, var in iteritems(self._variables):
   1148             var_indexers = dict((k, v) for k, v in indexers if k in var.dims)
-> 1149             new_var = var.isel(**var_indexers)
   1150             if not (drop and name in var_indexers):
   1151                 variables[name] = new_var

/dev/shm/gerrit/venv/stable-3.5/lib/python3.5/site-packages/xarray/core/variable.py in isel(self, **indexers)
    547             if dim in indexers:
    548                 key[i] = indexers[dim]
--> 549         return self[tuple(key)]
    550
    551     def squeeze(self, dim=None):

/dev/shm/gerrit/venv/stable-3.5/lib/python3.5/site-packages/xarray/core/variable.py in __getitem__(self, key)
    377         dims = tuple(dim for k, dim in zip(key, self.dims)
    378                      if not isinstance(k, integer_types))
--> 379         values = self._indexable_data[key]
    380         # orthogonal indexing should ensure the dimensionality is consistent
    381         if hasattr(values, 'ndim'):

/dev/shm/gerrit/venv/stable-3.5/lib/python3.5/site-packages/xarray/core/indexing.py in __getitem__(self, key)
    467
    468     def __getitem__(self, key):
--> 469         key = self._convert_key(key)
    470         return self._ensure_ndarray(self.array[key])
    471

/dev/shm/gerrit/venv/stable-3.5/lib/python3.5/site-packages/xarray/core/indexing.py in _convert_key(self, key)
    454         if any(not isinstance(k, integer_types + (slice,)) for k in key):
    455             # key would trigger fancy indexing
--> 456             key = orthogonal_indexer(key, self.shape)
    457         return key
    458

/dev/shm/gerrit/venv/stable-3.5/lib/python3.5/site-packages/xarray/core/indexing.py in orthogonal_indexer(key, shape)
     78     """
     79     # replace Ellipsis objects with slices
---> 80     key = list(canonicalize_indexer(key, len(shape)))
     81     # replace 1d arrays and slices with broadcast compatible arrays
     82     # note: we treat integers separately (instead of turning them into 1d

/dev/shm/gerrit/venv/stable-3.5/lib/python3.5/site-packages/xarray/core/indexing.py in canonicalize_indexer(key, ndim)
     66         return indexer
     67
---> 68     return tuple(canonicalize(k) for k in expanded_indexer(key, ndim))
     69
     70

/dev/shm/gerrit/venv/stable-3.5/lib/python3.5/site-packages/xarray/core/indexing.py in <genexpr>(.0)
     66         return indexer
     67
---> 68     return tuple(canonicalize(k) for k in expanded_indexer(key, ndim))
     69
     70

/dev/shm/gerrit/venv/stable-3.5/lib/python3.5/site-packages/xarray/core/indexing.py in canonicalize(indexer)
     63                                  'array indexing; all subkeys must be '
     64                                  'slices, integers or sequences of '
---> 65                                  'integers or Booleans' % indexer)
     66         return indexer
     67

ValueError: invalid subkey array([0], dtype=uint64) for integer based array indexing; all subkeys must be slices, integers or sequences of integers or Booleans
```
 |
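A user-side workaround sketch (an assumption, not the fix that landed): cast the unsigned indexer to a signed integer dtype before indexing.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"A": (("x", "y"), np.arange(5 * 6).reshape(5, 6))})

idx = np.array([0], dtype="uint64")
# Casting to int64 sidesteps the unsigned-integer rejection:
sub = ds[{"x": idx.astype("int64")}]
print(sub["A"].shape)
```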
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1405/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |