issues
12 rows where state = "closed" and user = 81219 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1461935127 | I_kwDOAMm_X85XI1wX | 7314 | Scatter plot infers weird default values | huard 81219 | closed | 0 | 2 | 2022-11-23T15:10:41Z | 2023-02-11T20:55:17Z | 2023-02-11T20:55:17Z | CONTRIBUTOR | What happened? The issue seems to be related to default values for the size and hue of markers. Instead of using plain defaults, xarray infers them from other variables in the dataset. What did you expect to happen? A scatter plot with default size and color for markers.
Now xarray has inferred that the size is somehow related to another variable. Note that the calculations required to draw the figure with those defaults take a huge amount of time. In the example below, I've subsetted the file so the code runs in a short time; without subsetting, it runs forever. Minimal Complete Verifiable Example
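The original example read from a subsetted data file that is not reproduced here; a minimal sketch of the same call pattern, with hypothetical variable names, assuming a Dataset holding several data variables:

```python
import numpy as np
import xarray as xr

# Hypothetical stand-in for the subsetted file used in the report.
ds = xr.Dataset(
    {
        "a": ("pt", np.random.rand(50)),
        "b": ("pt", np.random.rand(50)),
        "c": ("pt", np.random.rand(50)),
    }
)

# Expected: plain markers with default size and color.
# Reported: xarray 2022.10.0 inferred hue/size defaults from other variables.
ds.plot.scatter(x="a", y="b")
```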
MVCE confirmation
Relevant log output: No response. Anything else we need to know? Environment:
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-131-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: ('en_CA', 'UTF-8')
libhdf5: 1.10.4
libnetcdf: 4.7.3
xarray: 2022.10.0
pandas: 1.4.3
numpy: 1.21.4
scipy: None
netCDF4: 1.5.7
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.5
dask: 2022.7.0
distributed: None
matplotlib: 3.6.2
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.5.0
cupy: None
pint: 0.20.1
sparse: None
flox: None
numpy_groupies: None
setuptools: 62.6.0
pip: 22.0.4
conda: None
pytest: 7.1.2
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
937160239 | MDU6SXNzdWU5MzcxNjAyMzk= | 5576 | Slicing bug with pandas 1.3 and CFTimeIndex | huard 81219 | closed | 0 | 2 | 2021-07-05T14:44:17Z | 2021-07-05T16:50:30Z | 2021-07-05T16:50:30Z | CONTRIBUTOR | What happened: Slicing into a DataArray along time with a CFTimeIndex fails since upgrading to pandas 1.3. What you expected to happen: The usual. Minimal Complete Verifiable Example:
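A minimal sketch reproducing the failing call, assuming any DataArray `ref` indexed by a CFTimeIndex (the `ref` below is hypothetical; the original example is not shown):

```python
import xarray as xr

# Hypothetical `ref`: any DataArray indexed by a CFTimeIndex will do.
time = xr.cftime_range("2014-12-25", periods=10, freq="D", calendar="noleap")
ref = xr.DataArray(range(10), coords={"time": time}, dims="time")

# Under pandas 1.3 with xarray 0.18.2 this raised the TypeError below.
ref.sel(time=slice(None, "2015-01-01"))
```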
```python
TypeError                                 Traceback (most recent call last)
<ipython-input-5-3afe7d577940> in <module>
----> 1 ref.sel(time=slice(None, "2015-01-01"))

~/.conda/envs/xclim/lib/python3.8/site-packages/xarray/core/dataarray.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   1269         Dimensions without coordinates: points
   1270         """
-> 1271         ds = self._to_temp_dataset().sel(
   1272             indexers=indexers,
   1273             drop=drop,

~/.conda/envs/xclim/lib/python3.8/site-packages/xarray/core/dataset.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   2363         """
   2364         indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel")
-> 2365         pos_indexers, new_indexes = remap_label_indexers(
   2366             self, indexers=indexers, method=method, tolerance=tolerance
   2367         )

~/.conda/envs/xclim/lib/python3.8/site-packages/xarray/core/coordinates.py in remap_label_indexers(obj, indexers, method, tolerance, **indexers_kwargs)
    419     }
    420
--> 421     pos_indexers, new_indexes = indexing.remap_label_indexers(
    422         obj, v_indexers, method=method, tolerance=tolerance
    423     )

~/.conda/envs/xclim/lib/python3.8/site-packages/xarray/core/indexing.py in remap_label_indexers(data_obj, indexers, method, tolerance)
    272         coords_dtype = data_obj.coords[dim].dtype
    273         label = maybe_cast_to_coords_dtype(label, coords_dtype)
--> 274         idxr, new_idx = convert_label_indexer(index, label, dim, method, tolerance)
    275         pos_indexers[dim] = idxr
    276         if new_idx is not None:

~/.conda/envs/xclim/lib/python3.8/site-packages/xarray/core/indexing.py in convert_label_indexer(index, label, index_name, method, tolerance)
    119                 "cannot use …

~/.conda/envs/xclim/lib/python3.8/site-packages/pandas/core/indexes/base.py in slice_indexer(self, start, end, step, kind)
   5684         slice(1, 3, None)
   5685         """
-> 5686         start_slice, end_slice = self.slice_locs(start, end, step=step)
   5687
   5688         # return a slice

~/.conda/envs/xclim/lib/python3.8/site-packages/pandas/core/indexes/base.py in slice_locs(self, start, end, step, kind)
   5892             end_slice = None
   5893         if end is not None:
-> 5894             end_slice = self.get_slice_bound(end, "right")
   5895         if end_slice is None:
   5896             end_slice = len(self)

~/.conda/envs/xclim/lib/python3.8/site-packages/pandas/core/indexes/base.py in get_slice_bound(self, label, side, kind)
   5796         # For datetime indices label may be a string that has to be converted
   5797         # to datetime boundary according to its resolution.
-> 5798         label = self._maybe_cast_slice_bound(label, side)
   5799
   5800         # we need to look up the label

TypeError: _maybe_cast_slice_bound() missing 1 required positional argument: 'kind'
```
Anything else we need to know?:
A quick diagnostic suggests that … Environment: Output of <tt>xr.show_versions()</tt>
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.6 | packaged by conda-forge | (default, Jan 25 2021, 23:21:18) [GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-77-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: ('en_CA', 'UTF-8')
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.18.2
pandas: 1.3.0
numpy: 1.20.0
scipy: 1.6.3
netCDF4: 1.5.5.1
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.4.1
nc_time_axis: 1.2.0
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2021.01.1
distributed: 2021.01.1
matplotlib: 3.4.2
cartopy: None
seaborn: None
numbagg: None
pint: 0.16.1
setuptools: 49.6.0.post20210108
pip: 21.0.1
conda: None
pytest: 6.2.2
IPython: 7.20.0
sphinx: 4.0.2
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
741100024 | MDExOlB1bGxSZXF1ZXN0NTE5NDc0NzQz | 4573 | Update xESMF link to pangeo-xesmf in related-projects | huard 81219 | closed | 0 | 1 | 2020-11-11T22:00:34Z | 2020-11-12T14:54:08Z | 2020-11-12T14:53:56Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/4573 | The new link is where development now occurs. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
369639339 | MDU6SXNzdWUzNjk2MzkzMzk= | 2481 | Implement CFPeriodIndex | huard 81219 | closed | 0 | 3 | 2018-10-12T17:20:04Z | 2020-11-02T01:26:48Z | 2020-11-02T01:26:48Z | CONTRIBUTOR | A CFPeriodIndex supporting non-standard calendars would be useful to facilitate climate analyses. The use case for me would be to find the start and end date of a resampling group. This is useful to spot missing values in a resampled time series, or to create … (see the workaround sketch after this row)

```python
import xarray as xr
import pandas as pd

cftime = xr.cftime_range(start='2000-01-01', periods=361, freq='D', calendar='360_day')
pdtime = pd.date_range(start='2000-01-01', periods=361, freq='D')

cf_da = xr.DataArray(range(361), coords={'time': cftime}, dims='time')
pd_da = xr.DataArray(range(361), coords={'time': pdtime}, dims='time')

cf_c = cf_da.resample(time='M').count()
pd_c = pd_da.resample(time='M').count()

cf_p = cf_c.indexes['time'].to_period()
pd_p = pd_c.indexes['time'].to_period()

cf_expected_days_in_group = cf_p.end_time - cf_p.start_time + pd.offsets.Day(1)
pd_expected_days_in_group = pd_p.end_time - pd_p.start_time + pd.offsets.Day(1)
```

Depends on #2191 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
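The workaround sketch referenced in #2481 above: for monthly groups specifically, the expected group length can be obtained without a PeriodIndex, assuming the `dt` accessor exposes `days_in_month` for the calendar in use:

```python
import xarray as xr

# Sketch of the missing-values check for monthly groups, assuming the
# dt accessor exposes days_in_month for cftime coordinates.
time = xr.cftime_range(start="2000-01-01", periods=361, freq="D", calendar="360_day")
da = xr.DataArray(range(361), coords={"time": time}, dims="time")

counts = da.resample(time="M").count()
expected = counts.time.dt.days_in_month  # 30 for every month in a 360_day calendar
missing = expected - counts              # non-zero where the group is incomplete
```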
492966281 | MDU6SXNzdWU0OTI5NjYyODE= | 3304 | DataArray.quantile does not honor `keep_attrs` | huard 81219 | closed | 0 | 3 | 2019-09-12T18:39:47Z | 2020-04-05T18:56:30Z | 2019-09-15T22:16:15Z | CONTRIBUTOR | MCVE Code Sample (see the reconstruction sketch after this row)

```python
# Your code here
import xarray as xr
```

Expected Output
Output of <tt>xr.show_versions()</tt>
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
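The reconstruction sketch referenced in #3304 above, assuming a trivial array with a `units` attribute (the original code block was reduced to its import):

```python
import xarray as xr

# Hypothetical reconstruction of the truncated MCVE.
da = xr.DataArray([0.0, 1.0], dims="x", attrs={"units": "K"})
out = da.quantile(0.9, dim="x", keep_attrs=True)
print(out.attrs)  # expected {'units': 'K'}; the reported bug returned {}
```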
561210241 | MDExOlB1bGxSZXF1ZXN0MzcyMDYyNTM2 | 3758 | Fix interp bug when indexer shares coordinates with array | huard 81219 | closed | 0 | 4 | 2020-02-06T19:06:22Z | 2020-03-13T13:58:38Z | 2020-03-13T13:58:38Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/3758 |
Replaces #3262 (I think). |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
538620718 | MDExOlB1bGxSZXF1ZXN0MzUzNzM1MDM4 | 3631 | Add support for CFTimeIndex in get_clean_interp_index | huard 81219 | closed | 0 | 11 | 2019-12-16T19:57:24Z | 2020-01-26T18:36:24Z | 2020-01-26T14:10:37Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/3631 |
Related to #3349. As suggested by @spencerkclark, index values are computed as a delta with respect to 1970-01-01. At the moment, this fails if dates fall outside the range of nanosecond timedeltas [1678 AD, 2262 AD]. Is this something we can fix? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
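A hedged usage sketch of what CFTimeIndex support in `get_clean_interp_index` enables, assuming interpolation along a cftime coordinate via `interpolate_na`:

```python
import numpy as np
import xarray as xr

# Usage sketch: gap-filling along a cftime coordinate, which requires
# get_clean_interp_index to handle CFTimeIndex.
time = xr.cftime_range("2000-01-01", periods=5, freq="D", calendar="noleap")
da = xr.DataArray([0.0, np.nan, 2.0, np.nan, 4.0],
                  coords={"time": time}, dims="time")
filled = da.interpolate_na(dim="time", use_coordinate=True)
```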
539821504 | MDExOlB1bGxSZXF1ZXN0MzU0NzMwNzI5 | 3642 | Make datetime_to_numeric more robust to overflow errors | huard 81219 | closed | 0 | 1 | 2019-12-18T17:34:41Z | 2020-01-20T19:21:49Z | 2020-01-20T19:21:49Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/3642 |
This is likely only safe with NumPy>=1.17 though. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
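For #3642 above, a sketch of the overflow in question: offsets from 1970-01-01 for far-away dates exceed the int64 nanosecond range, so conversion has to fall back to a coarser resolution (the exact mechanism inside `datetime_to_numeric` may differ):

```python
import datetime
import numpy as np

# A date in year 1 is ~6.2e19 ns away from 1970-01-01, beyond the
# int64 range (~9.2e18), so a nanosecond timedelta64 cannot hold it.
delta = datetime.datetime(1970, 1, 1) - datetime.datetime(1, 1, 1)
ns = delta.days * 86400 * 10**9
print(ns > np.iinfo("int64").max)  # True: nanoseconds overflow

# Microsecond resolution still fits comfortably.
us = np.array([delta], dtype="timedelta64[us]")
```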
492987154 | MDExOlB1bGxSZXF1ZXN0MzE3MDU0MjUz | 3305 | Honor `keep_attrs` in DataArray.quantile | huard 81219 | closed | 0 | 1 | 2019-09-12T19:27:14Z | 2019-09-15T22:16:27Z | 2019-09-15T22:16:15Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/3305 |
Note that I've set the default to True (if keep_attrs is None). This sounded reasonable since quantiles share the same units and properties as the original array, but I can switch it to False if that's the usual default. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3305/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
455262061 | MDU6SXNzdWU0NTUyNjIwNjE= | 3018 | Add quantile method to groupby object | huard 81219 | closed | 0 | 4 | 2019-06-12T14:54:35Z | 2019-07-02T16:23:57Z | 2019-06-24T15:21:29Z | CONTRIBUTOR | Dataset and DataArray objects have a quantile method, but not GroupBy. This would be useful for climatological analyses. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3018/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
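A usage sketch of what #3018 above asks for, assuming the method mirrors `DataArray.quantile` (this is what #2828 below eventually added):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical usage once GroupBy gains quantile, mirroring DataArray.quantile.
da = xr.DataArray(
    np.random.rand(365),
    coords={"time": pd.date_range("2000-01-01", periods=365)},
    dims="time",
)
q90 = da.groupby("time.season").quantile(0.9)  # one value per season
```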
461088361 | MDU6SXNzdWU0NjEwODgzNjE= | 3047 | Assign attributes to DataArrays when creating dataset with PydapDataStore + subsetting | huard 81219 | closed | 0 | 2 | 2019-06-26T17:14:10Z | 2019-06-27T12:02:34Z | 2019-06-27T12:02:33Z | CONTRIBUTOR | MCVE Code Sample

```python
import xarray as xr

# PyDAP access without subsetting - everything's fine
url = 'http://remotetest.unidata.ucar.edu/thredds/dodsC/testdods/coads_climatology.nc'
ds = xr.open_dataset(url, engine='pydap', decode_times=False)
ds.TIME.units  # yields 'hour since 0000-01-01 00:00:00'

# PyDAP access with subsetting - variable attributes are global...
dss = xr.open_dataset(url + '?SST[0:1:11][0:1:0][0:1:0]', engine='pydap', decode_times=False)
print(dss.SST.attrs)   # all good so far
print(dss.TIME.attrs)  # oh oh... nothing
print(dss.attrs['TIME.units'])
```

Problem Description: Opening a subsetted dataset with PydapDataStore creates global attributes instead of variable attributes. Expected Output: All the global … Output of <tt>xr.show_versions()</tt>
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3047/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
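For #3047 above, a hypothetical post-processing sketch that moves the flattened globals back onto their variables; the `"VAR.attr"` dot convention is an assumption based on the `dss.attrs['TIME.units']` line in the report:

```python
# Hypothetical cleanup, assuming `dss` from the example above and that
# subsetted PyDAP attributes arrive as global keys of the form "VAR.attr".
for key in list(dss.attrs):
    if "." in key:
        var, attr = key.split(".", 1)
        if var in dss.variables:
            dss[var].attrs[attr] = dss.attrs.pop(key)
```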
423405197 | MDExOlB1bGxSZXF1ZXN0MjYyOTgzOTcz | 2828 | Add quantile method to GroupBy | huard 81219 | closed | 0 | 6 | 2019-03-20T18:20:41Z | 2019-06-24T15:21:36Z | 2019-06-24T15:21:29Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/2828 |
Fixes #3018. Note that I've added an unrelated test that exposes an issue with grouping when there is only one element per group. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2828/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
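Given this schema, the filter described at the top of the page amounts to the following query:

```sql
select *
from [issues]
where [state] = 'closed'
  and [user] = 81219
order by [updated_at] desc;
```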