issues
3 rows where state = "closed" and user = 20617032, sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
485508509 | MDU6SXNzdWU0ODU1MDg1MDk= | 3267 | Resample execution time is significantly longer in version 0.12 than 0.11 | aspitarl 20617032 | closed | 0 | 5 | 2019-08-26T23:51:11Z | 2023-04-29T03:30:20Z | 2023-04-29T03:30:20Z | NONE |
**MCVE Code Sample**

```python
import numpy as np
import xarray as xr
import pandas as pd
import time

size = 1000000
data = np.random.random(size)
times = pd.date_range('2019-01-01', periods=size, freq='ms')
da = xr.DataArray(data, dims=['time'], coords={'time': times})

start = time.time()
da.resample(time='s').mean()
print('Elapsed time: ' + str(time.time() - start))
print('xarray version: ' + str(xr.__version__))
```

```
Elapsed time: 0.2671010494232178 xarray version: 0.11.3
Elapsed time: 6.652455568313599 xarray version: 0.12.0
```

**Expected Output**
I expect that, with the default parameters, resample should take a similar amount of time as in previous versions. Or at least the documentation should specify what parameters must be passed to resample or mean to achieve the same time as in version 0.11.3.

**Problem Description**
Resampling a DataArray or Dataset and then calling mean takes significantly longer in the latest versions of xarray (0.12). The changelog specifies multiple changes to the resample method and new parameters, but it is not clear what one would specify to achieve the previous execution time.

Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
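The MCVE above resamples millisecond data into one-second bins through xarray's resample-then-mean path. As a hedged comparison sketch only (pure pandas rather than the xarray code path the issue benchmarks, with the size reduced for a quick run), the equivalent reduction looks like:

```python
import time

import numpy as np
import pandas as pd

# Same shape of data as the MCVE above, smaller for a quick run.
size = 100_000  # 100,000 ms spans exactly 100 seconds
data = np.random.random(size)
times = pd.date_range('2019-01-01', periods=size, freq='ms')
s = pd.Series(data, index=times)

start = time.time()
out = s.resample('1s').mean()  # one bin per second -> 100 bins
elapsed = time.time() - start

print(len(out))  # 100
```

Timing the pandas path against the xarray path on the same data is one way to confirm the regression lives in xarray's resample machinery rather than in the underlying binning.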
619327957 | MDExOlB1bGxSZXF1ZXN0NDE4ODc5MTU3 | 4067 | make text wrap width an argument in label_from_attrs | aspitarl 20617032 | closed | 0 | 4 | 2020-05-15T23:44:09Z | 2023-03-26T20:06:29Z | 2023-03-26T20:06:28Z | NONE | 0 | pydata/xarray/pulls/4067 |
label_from_attrs used textwrap.wrap with a default wrap width of 30; this commit changes label_from_attrs to instead take an argument wrap_width that specifies the wrap width.

Context: I find the label_from_attrs function useful for plots that I make using pyplot, but needed to change the text wrap width. I am new to contributing to projects like this, so bear with me. I only ran the mypy and flake8 linters and figured that was good enough for a minor change like this. I didn't see label_from_attrs in the API document, so I just added a line to whats-new.rst.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
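The PR above essentially parameterizes the width passed to textwrap.wrap. A minimal sketch of the idea, using a hypothetical stand-in helper (`make_label` is not xarray's actual `label_from_attrs` implementation):

```python
import textwrap


def make_label(name, units, wrap_width=30):
    # Build a "name [units]" label and wrap it at wrap_width characters,
    # mirroring the wrap_width argument the PR adds to label_from_attrs.
    label = f"{name} [{units}]"
    return "\n".join(textwrap.wrap(label, width=wrap_width))


# The default width of 30 keeps this 30-character label on one line...
print(make_label("sea surface temperature", "degC"))
# ...while a narrower width breaks it across three lines.
print(make_label("sea surface temperature", "degC", wrap_width=15))
```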
1352621981 | I_kwDOAMm_X85Qn1-d | 6959 | Assigning coordinate level to MultiIndex fails if MultiIndex only has one level | aspitarl 20617032 | closed | 0 | benbovy 4160723 | 0 | 2022-08-26T18:48:18Z | 2022-09-27T10:35:39Z | 2022-09-27T10:35:39Z | NONE |
**What happened?**
This issue originates from this discussion, where I was trying to figure out the best way to replace coordinate values in a MultiIndex level. I found that removing the level with

**What did you expect to happen?**
I expect that removing and replacing a coordinate level would work the same independent of the number of levels in the MultiIndex.

**Minimal Complete Verifiable Example**

```python
import numpy as np
import pandas as pd
import xarray as xr

# Replace the coordinates in level 'one'. This works as expected.
midx = pd.MultiIndex.from_product([[0, 1, 2], [3, 4], [5, 6]], names=("one", "two", "three"))
mda = xr.DataArray(np.random.rand(12, 3), [("x", midx), ("y", range(3))])

new_coords = mda.coords['one'].values * 2
mda.reset_index('one', drop=True).assign_coords(one=('x', new_coords)).set_index(x='one', append=True)  # Works

# Drop the 'two' level beforehand such that the intermediate state has a multiindex
# with only the 'three' level; this throws a ValueError
mda.reset_index('two', drop=True).reset_index('one', drop=True).assign_coords(one=('x', new_coords))  # ValueError

# We can initialize a data array with only two levels and only drop the 'one'
# level, which gives the same ValueError. This shows that the problem is not
# due to something with dropping the 'two' level above, but something inherent
# to dropping to a state with only one multiindex level
midx = pd.MultiIndex.from_product([[0, 1, 2], [3, 4]], names=("one", "two"))
mda = xr.DataArray(np.random.rand(6, 2), [("x", midx), ("y", range(2))])

new_coords = mda.coords['one'].values * 2
mda.reset_index('one', drop=True).assign_coords(one=('x', new_coords)).set_index(x='one', append=True)  # ValueError
```

**MVCE confirmation**
**Relevant log output**

```python
# First example, starting from 3 level multiindex and dropping two levels
ValueError                                Traceback (most recent call last)
c:\Users\aspit\Git\Learn\xarray\replace_coord_issue.py in <module>
     15 # Drop the two level before had such that the intermediate state has a multindex
     16 # with only the 'three' level, this throws a ValueError
---> 17 mda.reset_index('two',drop=True).reset_index('one', drop=True).assign_coords(one= ('x',new_coords))

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\common.py in assign_coords(self, coords, **coords_kwargs)
    590         data = self.copy(deep=False)
    591         results: dict[Hashable, Any] = self._calc_assign_results(coords_combined)
--> 592         data.coords.update(results)
    593         return data
    594

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\coordinates.py in update(self, other)
    160         other_vars = getattr(other, "variables", other)
    161         self._maybe_drop_multiindex_coords(set(other_vars))
--> 162         coords, indexes = merge_coords(
    163             [self.variables, other_vars], priority_arg=1, indexes=self.xindexes
    164         )

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\merge.py in merge_coords(objects, compat, join, priority_arg, indexes, fill_value)
    564     collected = collect_variables_and_indexes(aligned)
    565     prioritized = _get_priority_vars_and_indexes(aligned, priority_arg, compat=compat)
--> 566     variables, out_indexes = merge_collected(collected, prioritized, compat=compat)
    567     return variables, out_indexes
    568

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\merge.py in merge_collected(grouped, prioritized, compat, combine_attrs, equals)
    252
    253     _assert_compat_valid(compat)
--> 254     _assert_prioritized_valid(grouped, prioritized)
    255
    256     merged_vars: dict[Hashable, Variable] = {}

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\merge.py in _assert_prioritized_valid(grouped, prioritized)
    199     common_names_str = ", ".join(f"{k!r}" for k in common_names)
    200     index_names_str = ", ".join(f"{k!r}" for k in index_coord_names)
--> 201     raise ValueError(
    202         f"cannot set or update variable(s) {common_names_str}, which would corrupt "
    203         f"the following index built from coordinates {index_names_str}:\n"

ValueError: cannot set or update variable(s) 'one', which would corrupt the following index built from coordinates 'x', 'one', 'three':
<xarray.core.indexes.PandasMultiIndex object at 0x00000225AA4B5200>

# Second example, starting from two level multiindex and dropping one level
ValueError                                Traceback (most recent call last)
c:\Users\aspit\Git\Learn\xarray\replace_coord_issue.py in <module>
     11
     12 new_coords = mda.coords['one'].values*2
---> 13 mda.reset_index('one', drop=True).assign_coords(one= ('x',new_coords)).set_index(x='one',append=True)

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\common.py in assign_coords(self, coords, **coords_kwargs)
    590         data = self.copy(deep=False)
    591         results: dict[Hashable, Any] = self._calc_assign_results(coords_combined)
--> 592         data.coords.update(results)
    593         return data
    594

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\coordinates.py in update(self, other)
    160         other_vars = getattr(other, "variables", other)
    161         self._maybe_drop_multiindex_coords(set(other_vars))
--> 162         coords, indexes = merge_coords(
    163             [self.variables, other_vars], priority_arg=1, indexes=self.xindexes
    164         )

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\merge.py in merge_coords(objects, compat, join, priority_arg, indexes, fill_value)
    564     collected = collect_variables_and_indexes(aligned)
    565     prioritized = _get_priority_vars_and_indexes(aligned, priority_arg, compat=compat)
--> 566     variables, out_indexes = merge_collected(collected, prioritized, compat=compat)
    567     return variables, out_indexes
    568

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\merge.py in merge_collected(grouped, prioritized, compat, combine_attrs, equals)
    252
    253     _assert_compat_valid(compat)
--> 254     _assert_prioritized_valid(grouped, prioritized)
    255
    256     merged_vars: dict[Hashable, Variable] = {}

c:\Users\aspit\anaconda3\envs\dataanalysis\lib\site-packages\xarray\core\merge.py in _assert_prioritized_valid(grouped, prioritized)
    199     common_names_str = ", ".join(f"{k!r}" for k in common_names)
    200     index_names_str = ", ".join(f"{k!r}" for k in index_coord_names)
--> 201     raise ValueError(
    202         f"cannot set or update variable(s) {common_names_str}, which would corrupt "
    203         f"the following index built from coordinates {index_names_str}:\n"

ValueError: cannot set or update variable(s) 'one', which would corrupt the following index built from coordinates 'x', 'one', 'two':
<xarray.core.indexes.PandasMultiIndex object at 0x00000225AA53C9E0>
```

**Anything else we need to know?**
No response

**Environment**
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:15:42) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 142 Stepping 12, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: ('English_United States', '1252')
libhdf5: 1.12.1
libnetcdf: 4.8.1
xarray: 2022.6.0
pandas: 1.3.4
numpy: 1.21.4
scipy: 1.7.3
netCDF4: 1.5.8
pydap: None
h5netcdf: 1.0.2
h5py: 3.7.0
Nio: None
zarr: None
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.5
dask: 2022.02.1
distributed: 2022.2.1
matplotlib: 3.4.3
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.7.1
cupy: None
pint: 0.18
sparse: None
flox: None
numpy_groupies: None
setuptools: 59.1.0
pip: 21.3.1
conda: None
pytest: 6.2.5
IPython: 7.29.0
sphinx: None
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
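The reset_index/assign_coords/set_index round-trip in the MCVE above is ultimately trying to replace the values of one MultiIndex level. For reference, and as a hedged sketch only (pure pandas, sidestepping the xarray code path where the ValueError occurs), pandas can perform that replacement directly:

```python
import pandas as pd

# Same two-level index as the second example in the MCVE above.
midx = pd.MultiIndex.from_product([[0, 1, 2], [3, 4]], names=("one", "two"))

# Replace level 'one' with doubled values, which is the operation the
# reset_index/assign_coords/set_index round-trip is aiming for.
new_midx = midx.set_levels(midx.levels[0] * 2, level="one")

print(new_midx.get_level_values("one").unique().tolist())  # [0, 2, 4]
```

This works regardless of how many levels remain in the index, which is the behavior the issue expects from the xarray round-trip.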
```sql
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
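The page's query (state = "closed" and user = 20617032, sorted by updated_at descending) can be reproduced against this schema with Python's sqlite3 module. A minimal sketch, with the table trimmed to the relevant columns and the three rows above inserted by hand (titles abbreviated):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issues (
    [id] INTEGER PRIMARY KEY, [number] INTEGER, [title] TEXT,
    [user] INTEGER, [state] TEXT, [updated_at] TEXT
);
INSERT INTO issues VALUES
    (485508509, 3267, 'Resample execution time ...', 20617032, 'closed', '2023-04-29T03:30:20Z'),
    (619327957, 4067, 'make text wrap width an argument ...', 20617032, 'closed', '2023-03-26T20:06:29Z'),
    (1352621981, 6959, 'Assigning coordinate level ...', 20617032, 'closed', '2022-09-27T10:35:39Z');
""")

# ISO-8601 timestamps sort chronologically as plain text, so ORDER BY
# on the TEXT column gives the same ordering the page shows.
rows = conn.execute(
    "SELECT number FROM issues WHERE state = 'closed' AND user = 20617032 "
    "ORDER BY updated_at DESC LIMIT 3"
).fetchall()
print([r[0] for r in rows])  # [3267, 4067, 6959]
```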