issues
4 rows where user = 20617032 sorted by updated_at descending
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at, closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
## Issue 3267: Resample execution time is significantly longer in version 0.12 than 0.11

| id | node_id | user | state | locked | comments | created_at | updated_at | closed_at | author_association | state_reason | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 485508509 | MDU6SXNzdWU0ODU1MDg1MDk= | aspitarl 20617032 | closed | 0 | 5 | 2019-08-26T23:51:11Z | 2023-04-29T03:30:20Z | 2023-04-29T03:30:20Z | NONE | completed | xarray 13221727 | issue |

**MCVE Code Sample**

```python
import numpy as np
import xarray as xr
import pandas as pd
import time

size = 1000000
data = np.random.random(size)
times = pd.date_range('2019-01-01', periods=size, freq='ms')
da = xr.DataArray(data, dims=['time'], coords={'time': times})

start = time.time()
da.resample(time='s').mean()
print('Elapsed time: ' + str(time.time() - start))
print('xarray version: ' + str(xr.__version__))
```

```
Elapsed time: 0.2671010494232178
xarray version: 0.11.3

Elapsed time: 6.652455568313599
xarray version: 0.12.0
```

**Expected Output**

I expect that, for the default parameters, resample should take a similar amount of time as in previous versions. Or, at the least, the documentation should specify what parameters must be passed to resample or mean to achieve the same time as in version 0.11.3.

**Problem Description**

Resampling a DataArray or Dataset and then calling mean takes significantly longer in the latest versions of xarray (0.12). The changelog specifies multiple changes to the resample method and new parameters, but it's not clear which one would need to be specified to recover the earlier execution time.

**Output of `xr.show_versions()`**

Reactions: `{ "url": "https://api.github.com/repos/pydata/xarray/issues/3267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }`
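The MCVE above times a single call with `time.time()`. When comparing two library versions, a best-of-N timer built on `time.perf_counter()` is less noisy; the `bench` helper below is a generic stdlib-only sketch (the helper name and interface are my own, not part of xarray):

```python
import time
import statistics

def bench(fn, *args, repeats=5, **kwargs):
    """Call fn repeats times; return (best, median) wall-clock seconds.

    time.perf_counter is monotonic and high-resolution, which makes it
    better suited to micro-benchmarks than time.time.
    """
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args, **kwargs)
        samples.append(time.perf_counter() - start)
    return min(samples), statistics.median(samples)

# Example with a stand-in workload; in the issue's setting fn would be
# a closure like: lambda: da.resample(time='s').mean()
best, med = bench(sorted, list(range(100_000)))
print(f"best={best:.6f}s median={med:.6f}s")
```

Taking the minimum over repeats filters out one-off interference (GC, caching, other processes), which matters when the claim is a ~25x slowdown between versions.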
## PR 4067: make text wrap width an argument in label_from_attrs

| id | node_id | user | state | locked | comments | created_at | updated_at | closed_at | author_association | draft | pull_request | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 619327957 | MDExOlB1bGxSZXF1ZXN0NDE4ODc5MTU3 | aspitarl 20617032 | closed | 0 | 4 | 2020-05-15T23:44:09Z | 2023-03-26T20:06:29Z | 2023-03-26T20:06:28Z | NONE | 0 | pydata/xarray/pulls/4067 | xarray 13221727 | pull |

`label_from_attrs` used `textwrap.wrap` with a default wrap width of 30; this commit changes `label_from_attrs` to instead take an argument `wrap_width` that specifies the wrap width.

Context: I find the `label_from_attrs` function useful for plots that I make using pyplot, but needed to change the text wrap width. I am new to contributing to projects like this, so bear with me. I only ran the mypy and flake8 linters and figured that was good enough for a minor change like this. And I didn't see `label_from_attrs` in the API document, so I just added a line to whats-new.rst.

Reactions: `{ "url": "https://api.github.com/repos/pydata/xarray/issues/4067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }`
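The behavior the PR describes — wrapping a plot label at a configurable width rather than textwrap's hard-coded 30 — can be sketched with stdlib `textwrap` alone. `make_label` below is a hypothetical stand-in, not xarray's actual `label_from_attrs`:

```python
import textwrap

def make_label(name, units=None, wrap_width=30):
    """Build a plot label in the spirit of xarray's label_from_attrs,
    but with the wrap width exposed as an argument instead of being
    fixed at 30 characters."""
    label = f"{name} [{units}]" if units else name
    # textwrap.wrap splits on whitespace into lines of at most wrap_width.
    return "\n".join(textwrap.wrap(label, width=wrap_width))

print(make_label("sea surface temperature anomaly", "degC", wrap_width=20))
```

Exposing `wrap_width` as a keyword keeps the old behavior as the default (`wrap_width=30`) while letting callers with long axis labels or small figures pick a different width, which is the backwards-compatible design the PR is after.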
## Issue 6959: Assigning coordinate level to MultiIndex fails if MultiIndex only has one level

| id | node_id | user | state | locked | assignee | comments | created_at | updated_at | closed_at | author_association |
|---|---|---|---|---|---|---|---|---|---|---|
| 1352621981 | I_kwDOAMm_X85Qn1-d | aspitarl 20617032 | closed | 0 | benbovy 4160723 | 0 | 2022-08-26T18:48:18Z | 2022-09-27T10:35:39Z | 2022-09-27T10:35:39Z | NONE |

**What happened?**

This issue originates from this discussion, where I was trying to figure out the best way to replace coordinate values in a MultiIndex level. I found that removing the level with `reset_index` and re-assigning it fails when the intermediate MultiIndex is left with only one level.

**What did you expect to happen?**

I expect that removing and replacing a coordinate level would work the same independent of the number of levels in the MultiIndex.

**Minimal Complete Verifiable Example**

```python
import numpy as np
import pandas as pd
import xarray as xr

# Replace the coordinates in level 'one'. This works as expected.
midx = pd.MultiIndex.from_product([[0, 1, 2], [3, 4], [5, 6]], names=("one", "two", "three"))
mda = xr.DataArray(np.random.rand(12, 3), [("x", midx), ("y", range(3))])
new_coords = mda.coords['one'].values * 2
mda.reset_index('one', drop=True).assign_coords(one=('x', new_coords)).set_index(x='one', append=True)  # Works

# Drop the 'two' level beforehand such that the intermediate state has a multiindex
# with only the 'three' level; this throws a ValueError.
mda.reset_index('two', drop=True).reset_index('one', drop=True).assign_coords(one=('x', new_coords))  # ValueError

# We can initialize a data array with only two levels and only drop the 'one'
# level, which gives the same ValueError. This shows that the problem is not
# due to something with dropping the 'two' level above, but something inherent
# to dropping to a state with only one multiindex level.
midx = pd.MultiIndex.from_product([[0, 1, 2], [3, 4]], names=("one", "two"))
mda = xr.DataArray(np.random.rand(6, 2), [("x", midx), ("y", range(2))])
new_coords = mda.coords['one'].values * 2
mda.reset_index('one', drop=True).assign_coords(one=('x', new_coords)).set_index(x='one', append=True)  # ValueError
```

**MVCE confirmation**

**Relevant log output**

```python
# First example, starting from 3-level multiindex and dropping two levels
ValueError                                Traceback (most recent call last)
c:\Users\aspit\Git\Learn\xarray\replace_coord_issue.py in <module>
     15 # Drop the two level before hand such that the intermediate state has a multindex
     16 # with only the 'three' level, this throws a ValueError
---> 17 mda.reset_index('two',drop=True).reset_index('one', drop=True).assign_coords(one= ('x',new_coords))

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\common.py in assign_coords(self, coords, **coords_kwargs)
    590         data = self.copy(deep=False)
    591         results: dict[Hashable, Any] = self._calc_assign_results(coords_combined)
--> 592         data.coords.update(results)
    593         return data
    594

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\coordinates.py in update(self, other)
    160         other_vars = getattr(other, "variables", other)
    161         self._maybe_drop_multiindex_coords(set(other_vars))
--> 162         coords, indexes = merge_coords(
    163             [self.variables, other_vars], priority_arg=1, indexes=self.xindexes
    164         )

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\merge.py in merge_coords(objects, compat, join, priority_arg, indexes, fill_value)
    564     collected = collect_variables_and_indexes(aligned)
    565     prioritized = _get_priority_vars_and_indexes(aligned, priority_arg, compat=compat)
--> 566     variables, out_indexes = merge_collected(collected, prioritized, compat=compat)
    567     return variables, out_indexes
    568

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\merge.py in merge_collected(grouped, prioritized, compat, combine_attrs, equals)
    252
    253     _assert_compat_valid(compat)
--> 254     _assert_prioritized_valid(grouped, prioritized)
    255
    256     merged_vars: dict[Hashable, Variable] = {}

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\merge.py in _assert_prioritized_valid(grouped, prioritized)
    199             common_names_str = ", ".join(f"{k!r}" for k in common_names)
    200             index_names_str = ", ".join(f"{k!r}" for k in index_coord_names)
--> 201             raise ValueError(
    202                 f"cannot set or update variable(s) {common_names_str}, which would corrupt "
    203                 f"the following index built from coordinates {index_names_str}:\n"

ValueError: cannot set or update variable(s) 'one', which would corrupt the following index built from coordinates 'x', 'one', 'three':
<xarray.core.indexes.PandasMultiIndex object at 0x00000225AA4B5200>

# Second example, starting from two-level multindex and dropping one level
ValueError                                Traceback (most recent call last)
c:\Users\aspit\Git\Learn\xarray\replace_coord_issue.py in <module>
     11
     12 new_coords = mda.coords['one'].values*2
---> 13 mda.reset_index('one', drop=True).assign_coords(one= ('x',new_coords)).set_index(x='one',append=True)

# [identical frames as above: assign_coords -> coords.update -> merge_coords
#  -> merge_collected -> _assert_prioritized_valid]

ValueError: cannot set or update variable(s) 'one', which would corrupt the following index built from coordinates 'x', 'one', 'two':
<xarray.core.indexes.PandasMultiIndex object at 0x00000225AA53C9E0>
```

**Anything else we need to know?**

No response

**Environment**
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:15:42) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 142 Stepping 12, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: ('English_United States', '1252')
libhdf5: 1.12.1
libnetcdf: 4.8.1
xarray: 2022.6.0
pandas: 1.3.4
numpy: 1.21.4
scipy: 1.7.3
netCDF4: 1.5.8
pydap: None
h5netcdf: 1.0.2
h5py: 3.7.0
Nio: None
zarr: None
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.5
dask: 2022.02.1
distributed: 2022.2.1
matplotlib: 3.4.3
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.7.1
cupy: None
pint: 0.18
sparse: None
flox: None
numpy_groupies: None
setuptools: 59.1.0
pip: 21.3.1
conda: None
pytest: 6.2.5
IPython: 7.29.0
sphinx: None
Reactions: `{ "url": "https://api.github.com/repos/pydata/xarray/issues/6959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }`

state_reason: completed | repo: xarray 13221727 | type: issue
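At the pandas level, the operation the issue is attempting — replacing the values of one MultiIndex level — can be expressed directly with `pandas.MultiIndex.set_levels`, which works no matter how many levels remain. This is a sketch of the underlying pandas behavior only, not a fix for the xarray `assign_coords` code path:

```python
import pandas as pd

# Same two-level index as the issue's second example.
midx = pd.MultiIndex.from_product([[0, 1, 2], [3, 4]], names=("one", "two"))

# Double the values of level 'one' in one step, instead of the
# reset_index / assign_coords / set_index round trip.
new_midx = midx.set_levels(midx.levels[0] * 2, level="one")

print(new_midx.get_level_values("one").unique().tolist())  # [0, 2, 4]
```

Because `set_levels` rebuilds the index atomically, there is no intermediate state with a partially dismantled MultiIndex — which is exactly the state the xarray traceback shows failing.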
## Issue 4562: Cannot plot multiindexed (stacked) coordinate as hue variable

| id | node_id | user | state | locked | comments | created_at | updated_at | author_association | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|
| 735523592 | MDU6SXNzdWU3MzU1MjM1OTI= | aspitarl 20617032 | open | 0 | 7 | 2020-11-03T17:53:01Z | 2021-11-16T23:39:09Z | NONE | xarray 13221727 | issue |

```python
import numpy as np
import pandas as pd
import xarray as xr

data = np.random.rand(50, 5)
x_idx = np.linspace(0, 50)

mi_idx1 = ['a', 'b', 'c', 'd', 'e']
mi_idx2 = [1, 2, 3, 4, 5]
mi = pd.MultiIndex.from_arrays([mi_idx1, mi_idx2], names=['mi_idx1', 'mi_idx2'])

coords = {'x': x_idx, 'mi': mi}

da = xr.DataArray(data, coords=coords, dims=['x', 'mi'])

da.plot(hue='mi')
```

It appears that since version 0.16.0, plotting with a multiindex coordinate as a hue dimension no longer works. In version 0.15.1 I get a plot like this: (figure omitted in this export). However, when upgrading to 0.16.0 or 0.16.1 I get the following traceback:

```python
ValueError                                Traceback (most recent call last)
<ipython-input-1-2de8705a6ec5> in <module>
     19 da = xr.DataArray(data, coords=coords, dims = ['x', 'mi'])
     20
---> 21 da.plot(hue='mi')

~\anaconda3\envs\datanalysis\lib\site-packages\xarray\plot\plot.py in __call__(self, **kwargs)
    444
    445     def __call__(self, **kwargs):
--> 446         return plot(self._da, **kwargs)
    447
    448     # we can't use functools.wraps here since that also modifies the name / qualname

~\anaconda3\envs\datanalysis\lib\site-packages\xarray\plot\plot.py in plot(darray, row, col, col_wrap, ax, hue, rtol, subplot_kws, **kwargs)
    198         kwargs["ax"] = ax
    199
--> 200     return plotfunc(darray, **kwargs)
    201
    202

~\anaconda3\envs\datanalysis\lib\site-packages\xarray\plot\plot.py in line(darray, row, col, figsize, aspect, size, ax, hue, x, y, xincrease, yincrease, xscale, yscale, xticks, yticks, xlim, ylim, add_legend, _labels, *args, **kwargs)
    293
    294     ax = get_axis(figsize, size, aspect, ax)
--> 295     xplt, yplt, hueplt, xlabel, ylabel, hue_label = _infer_line_data(darray, x, y, hue)
    296
    297     # Remove pd.Intervals if contained in xplt.values and/or yplt.values.

~\anaconda3\envs\datanalysis\lib\site-packages\xarray\plot\plot.py in _infer_line_data(darray, x, y, hue)
     66
     67     if y is None:
---> 68         xname, huename = _infer_xy_labels(darray=darray, x=x, y=hue)
     69         xplt = darray[xname]
     70         if xplt.ndim > 1:

~\anaconda3\envs\datanalysis\lib\site-packages\xarray\plot\utils.py in _infer_xy_labels(darray, x, y, imshow, rgb)
    378         y, x = darray.dims
    379     elif x is None:
--> 380         _assert_valid_xy(darray, y, "y")
    381         x = darray.dims[0] if y == darray.dims[1] else darray.dims[1]
    382     elif y is None:

~\anaconda3\envs\datanalysis\lib\site-packages\xarray\plot\utils.py in _assert_valid_xy(darray, xy, name)
    410     if xy not in valid_xy:
    411         valid_xy_str = "', '".join(sorted(valid_xy))
--> 412         raise ValueError(f"{name} must be one of None, '{valid_xy_str}'")
    413
    414

ValueError: y must be one of None, 'mi_idx1', 'mi_idx2', 'x'
```

Setting `x='x'` does not fix the problem.

**Output of `xr.show_versions()`**

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.3 (default, Jul 2 2020, 17:30:36) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 142 Stepping 12, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: English_United States.1252
libhdf5: 1.10.4
libnetcdf: 4.7.3
xarray: 0.16.0
pandas: 1.0.5
numpy: 1.18.5
scipy: 1.5.0
netCDF4: 1.5.3
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.2.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2.20.0
distributed: 2.20.0
matplotlib: 3.2.2
cartopy: None
seaborn: 0.10.1
numbagg: None
pint: 0.16.1
setuptools: 49.2.0.post20200714
pip: 20.1.1
conda: None
pytest: 5.4.3
IPython: 7.16.1
sphinx: 3.1.2
```

Reactions: `{ "url": "https://api.github.com/repos/pydata/xarray/issues/4562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }`

repo: xarray 13221727 | type: issue
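The error arises because the plot code only accepts the individual level names (`mi_idx1`, `mi_idx2`) or `x` as valid hue variables, not the stacked coordinate `mi` itself. A sidestep (not a fix for the xarray bug) is to collapse the MultiIndex into a single flat coordinate before plotting; the flattening step itself is plain pandas:

```python
import pandas as pd

# Same stacked coordinate shape as the issue's example (shortened here).
mi = pd.MultiIndex.from_arrays([['a', 'b', 'c'], [1, 2, 3]],
                               names=['mi_idx1', 'mi_idx2'])

# Collapse each (letter, number) tuple into one flat label like 'a-1';
# assigned back as an ordinary coordinate, it is a valid hue variable.
flat = mi.map(lambda t: "-".join(map(str, t)))

print(list(flat))  # ['a-1', 'b-2', 'c-3']
```

The resulting flat labels could then be assigned as a non-index coordinate on the DataArray and passed as `hue`; that last step needs xarray and is not shown here.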
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
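Since the export is backed by SQLite, the page's query ("rows where user = 20617032 sorted by updated_at descending") can be reproduced locally with the stdlib `sqlite3` module. The sketch below uses a trimmed schema with only the columns the query touches, and abbreviated titles:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trimmed version of the exported schema: only the columns needed
# to reproduce the page's filter and sort.
conn.execute("""
    CREATE TABLE issues (
        id INTEGER PRIMARY KEY,
        number INTEGER,
        title TEXT,
        user INTEGER,
        updated_at TEXT
    )""")
rows = [
    (485508509, 3267, "Resample execution time ...", 20617032, "2023-04-29T03:30:20Z"),
    (619327957, 4067, "make text wrap width an argument ...", 20617032, "2023-03-26T20:06:29Z"),
    (1352621981, 6959, "Assigning coordinate level ...", 20617032, "2022-09-27T10:35:39Z"),
    (735523592, 4562, "Cannot plot multiindexed ...", 20617032, "2021-11-16T23:39:09Z"),
]
conn.executemany("INSERT INTO issues VALUES (?, ?, ?, ?, ?)", rows)

# ISO-8601 timestamps stored as TEXT sort lexicographically in
# chronological order, so ORDER BY updated_at DESC works as expected.
numbers = [n for (n,) in conn.execute(
    "SELECT number FROM issues WHERE user = 20617032 ORDER BY updated_at DESC")]
print(numbers)  # [3267, 4067, 6959, 4562]
```

The output order matches the four records above, confirming the page's "sorted by updated_at descending" description.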