issues
35 rows where user = 22245117 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1918317723 | PR_kwDOAMm_X85bfKv_ | 8253 | fix zarr datetime64 chunks | malmans2 22245117 | closed | 0 | 14 | 2023-09-28T21:48:32Z | 2024-01-29T19:12:31Z | 2024-01-29T19:12:31Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/8253 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1987252044 | PR_kwDOAMm_X85fHv_I | 8439 | Restore dask arrays rather than editing encoding | malmans2 22245117 | closed | 0 | 0 | 2023-11-10T09:32:23Z | 2023-11-10T09:58:39Z | 2023-11-10T09:58:39Z | CONTRIBUTOR | 1 | pydata/xarray/pulls/8439 | Just to show why restoring dask arrays rather than editing encoding does not work in #8253 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8439/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1706179211 | I_kwDOAMm_X85lsjqL | 7837 | Weighted reductions inconsistency when variables have missing dimensions | malmans2 22245117 | open | 0 | 0 | 2023-05-11T16:40:58Z | 2023-11-06T06:01:35Z | CONTRIBUTOR | What happened?

There are some inconsistencies in the errors raised by weighted reductions when the dimensions over which to apply the reduction are not present in all variables.

What did you expect to happen?

I think all reduction methods should have the same behaviour. I'm not sure what's the best behaviour, although I probably prefer the […]

Minimal Complete Verifiable Example

```Python
import xarray as xr

ds = xr.Dataset({"foo": xr.DataArray([[1]] * 2), "bar": 1})
weighted = ds.weighted(ds["dim_0"])

weighted.mean(ds.dims)  # OK
weighted.std(ds.dims)  # ValueError
```

MVCE confirmation
Relevant log output

```Python
ValueError                                Traceback (most recent call last)
Cell In[1], line 7
      4 weighted = ds.weighted(ds["dim_0"])
      6 weighted.mean(ds.dims)  # OK
----> 7 weighted.std(ds.dims)  # ValueError

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:503, in Weighted.std(self, dim, skipna, keep_attrs)
    497 def std(
    498     self,
    499     dim: Dims = None,
    500     skipna: bool | None = None,
    501     keep_attrs: bool | None = None,
    502 ) -> T_Xarray:
--> 503     return self._implementation(
    504         self._weighted_std, dim=dim, skipna=skipna, keep_attrs=keep_attrs
    505     )

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:540, in DatasetWeighted._implementation(self, func, dim, **kwargs)
    537 def _implementation(self, func, dim, **kwargs) -> Dataset:
    538     self._check_dim(dim)
--> 540     return self.obj.map(func, dim=dim, **kwargs)

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/dataset.py:5964, in Dataset.map(self, func, keep_attrs, args, **kwargs)
   5962 if keep_attrs is None:
   5963     keep_attrs = _get_keep_attrs(default=False)
-> 5964 variables = {
   5965     k: maybe_wrap_array(v, func(v, *args, **kwargs))
   5966     for k, v in self.data_vars.items()
   5967 }
   5968 if keep_attrs:
   5969     for k, v in variables.items():

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/dataset.py:5965, in <dictcomp>(.0)
   5962 if keep_attrs is None:
   5963     keep_attrs = _get_keep_attrs(default=False)
   5964 variables = {
-> 5965     k: maybe_wrap_array(v, func(v, *args, **kwargs))
   5966     for k, v in self.data_vars.items()
   5967 }
   5968 if keep_attrs:
   5969     for k, v in variables.items():

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:309, in Weighted._weighted_std(self, da, dim, skipna)
    301 def _weighted_std(
    302     self,
    303     da: DataArray,
    304     dim: Dims = None,
    305     skipna: bool | None = None,
    306 ) -> DataArray:
    307     """Reduce a DataArray by a weighted […]

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:295, in Weighted._weighted_var(self, da, dim, skipna)
    287 def _weighted_var(
    288     self,
    289     da: DataArray,
    290     dim: Dims = None,
    291     skipna: bool | None = None,
    292 ) -> DataArray:
    293     """Reduce a DataArray by a weighted […]

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:259, in Weighted._sum_of_squares(self, da, dim, skipna)
    251 def _sum_of_squares(
    252     self,
    253     da: DataArray,
    254     dim: Dims = None,
    255     skipna: bool | None = None,
    256 ) -> DataArray:
    257     """Reduce a DataArray by a weighted […]

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:483, in Weighted.mean(self, dim, skipna, keep_attrs)
    477 def mean(
    478     self,
    479     dim: Dims = None,
    480     skipna: bool | None = None,
    481     keep_attrs: bool | None = None,
    482 ) -> T_Xarray:
--> 483     return self._implementation(
    484         self._weighted_mean, dim=dim, skipna=skipna, keep_attrs=keep_attrs
    485     )

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:529, in DataArrayWeighted._implementation(self, func, dim, **kwargs)
    528 def _implementation(self, func, dim, **kwargs) -> DataArray:
--> 529     self._check_dim(dim)
    531     dataset = self.obj._to_temp_dataset()
    532     dataset = dataset.map(func, dim=dim, **kwargs)

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:203, in Weighted._check_dim(self, dim)
    201 missing_dims = set(dims) - set(self.obj.dims) - set(self.weights.dims)
    202 if missing_dims:
--> 203     raise ValueError(
    204         f"{self.__class__.__name__} does not contain the dimensions: {missing_dims}"
    205     )

ValueError: DataArrayWeighted does not contain the dimensions: {'dim_1'}
```

Anything else we need to know?

No response

Environment
INSTALLED VERSIONS
------------------
commit: None
python: 3.11.3 | packaged by conda-forge | (main, Apr 6 2023, 09:05:00) [Clang 14.0.6 ]
python-bits: 64
OS: Darwin
OS-release: 22.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: None
LOCALE: (None, 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 2023.4.2
pandas: 2.0.1
numpy: 1.24.3
scipy: None
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 67.7.2
pip: 23.1.2
conda: None
pytest: None
mypy: None
IPython: 8.13.2
sphinx: None
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7837/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
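Read from the traceback above, the error reduces to a single set-difference check. The sketch below (the function name and message are illustrative, not xarray's actual API) shows the uniform check that, applied to every reduction method alike, would make `mean` and `std` behave the same:

```python
def check_dims(var_dims, weight_dims, requested):
    """Raise if a requested reduction dim is found in neither the
    variable's dims nor the weights' dims (mirrors the set arithmetic
    in the Weighted._check_dim frame of the traceback)."""
    missing = set(requested) - set(var_dims) - set(weight_dims)
    if missing:
        raise ValueError(f"object does not contain the dimensions: {sorted(missing)}")

# "foo" has both dims, so reducing over ("dim_0", "dim_1") passes
check_dims(("dim_0", "dim_1"), ("dim_0",), ["dim_0", "dim_1"])

# the scalar variable "bar" has no dims, so the same request fails
try:
    check_dims((), ("dim_0",), ["dim_0", "dim_1"])
except ValueError as err:
    print(err)  # object does not contain the dimensions: ['dim_1']
```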
1355807694 | I_kwDOAMm_X85Qz_vO | 6970 | empty attributes silently change | malmans2 22245117 | closed | 0 | 3 | 2022-08-30T13:51:14Z | 2023-09-09T04:53:20Z | 2023-09-09T04:53:20Z | CONTRIBUTOR | What happened?

When […]

What did you expect to happen?

In the example below, the tokens should be identical.

Minimal Complete Verifiable Example

```Python
import xarray as xr
import dask

ds = xr.Dataset({"foo": [0]})  # the assertion below would be OK if I specify attrs={}

token0 = dask.base.tokenize(ds)
print(ds)  # this could be anything that uses attrs under the hood
token1 = dask.base.tokenize(ds)

assert token0 == token1  # AssertionError
```

MVCE confirmation
Relevant log output

No response

Anything else we need to know?

I thought we could store […] But a few tests failed, so it's probably the wrong way to fix this:

```
_________________ TestDask.test_attrs_mfdataset __________________
self = <xarray.tests.test_backends.TestDask object at 0x151cf2ef0>
/Users/mattia/MyGit/xarray/xarray/tests/test_backends.py:3576: Failed __________________ TestDask.test_open_mfdataset_attrs_file ___________________ self = <xarray.tests.test_backends.TestDask object at 0x151cf1de0>
/Users/mattia/MyGit/xarray/xarray/tests/test_backends.py:3594: AssertionError ________________ TestDask.test_open_mfdataset_attrs_file_path ________________ self = <xarray.tests.test_backends.TestDask object at 0x151cf1c30>
/Users/mattia/MyGit/xarray/xarray/tests/test_backends.py:3613: AssertionError ============================================================================================================================== warnings summary ============================================================================================================================== ../../miniconda3/envs/xarray/lib/python3.10/site-packages/seaborn/rcmod.py:82 /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/seaborn/rcmod.py:82: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. if LooseVersion(mpl.version) >= "3.0": ../../miniconda3/envs/xarray/lib/python3.10/site-packages/setuptools/_distutils/version.py:346 xarray/tests/test_backends.py::TestNetCDF4Data::test_zero_dimensional_variable /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/setuptools/_distutils/version.py:346: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. other = LooseVersion(other) xarray/tests/test_array_api.py:10 /Users/mattia/MyGit/xarray/xarray/tests/test_array_api.py:10: UserWarning: The numpy.array_api submodule is still experimental. See NEP 47. import numpy.array_api as xp # isort:skip xarray/tests/test_accessor_dt.py: 7 warnings xarray/tests/test_cftime_offsets.py: 5 warnings xarray/tests/test_cftimeindex.py: 64 warnings xarray/tests/test_cftimeindex_resample.py: 488 warnings xarray/tests/test_missing.py: 2 warnings /Users/mattia/MyGit/xarray/xarray/coding/times.py:365: FutureWarning: Index.ravel returning ndarray is deprecated; in a future version this will return a view on self. sample = dates.ravel()[0] xarray/tests/test_backends.py::TestNetCDF4Data::test_zero_dimensional_variable /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/cfgrib/xarray_plugin.py:11: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. 
if LooseVersion(xr.version) <= "0.17.0": xarray/tests/test_backends.py::TestDask::test_inline_array /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/_pytest/python.py:192: RuntimeWarning: deallocating CachingFileManager(<class 'netCDF4._netCDF4.Dataset'>, '/var/folders/_x/gdn6kyqn5d5g9j_ygdpcv5vm0000gp/T/tmp7ww27uxi/temp-2317.nc', mode='r', kwargs={'clobber': True, 'diskless': False, 'persist': False, 'format': 'NETCDF4'}), but file is not already closed. This may indicate a bug. result = testfunction(**testargs) xarray/tests/test_backends.py::test_open_fsspec /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/fsspec/implementations/cached.py:589: ResourceWarning: unclosed file <_io.BufferedReader name='/var/folders/_x/gdn6kyqn5d5g9j_ygdpcv5vm0000gp/T/tmp8f034evp/0d56871fd8c14f69c81fd11bcd488de08bd1efce70691c6143d2a5f88be9ca84'> out[p] = open(fn, "rb").read() Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info. xarray/tests/test_backends.py::test_open_fsspec /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/fsspec/implementations/cached.py:589: ResourceWarning: unclosed file <_io.BufferedReader name='/var/folders/_x/gdn6kyqn5d5g9j_ygdpcv5vm0000gp/T/tmp8f034evp/836ec38b21b701a0aae052168a2a2eab45504e6c6ba441f202e77f4f79b1a7c4'> out[p] = open(fn, "rb").read() Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info. 
xarray/tests/test_backends.py::test_open_fsspec /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/fsspec/implementations/cached.py:589: ResourceWarning: unclosed file <_io.BufferedReader name='/var/folders/_x/gdn6kyqn5d5g9j_ygdpcv5vm0000gp/T/tmpmf5ddc2u/5b50e5f9d1df25a3c114c62d7a9dcfcd80615885572d4e2cb48894b48a393262'> out[p] = open(fn, "rb").read() Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info. xarray/tests/test_backends.py::test_open_fsspec /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/fsspec/implementations/cached.py:589: ResourceWarning: unclosed file <_io.BufferedReader name='/var/folders/_x/gdn6kyqn5d5g9j_ygdpcv5vm0000gp/T/tmpmf5ddc2u/0d56871fd8c14f69c81fd11bcd488de08bd1efce70691c6143d2a5f88be9ca84'> out[p] = open(fn, "rb").read() Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info. xarray/tests/test_backends.py::test_open_fsspec /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/fsspec/implementations/cached.py:589: ResourceWarning: unclosed file <_io.BufferedReader name='/var/folders/_x/gdn6kyqn5d5g9j_ygdpcv5vm0000gp/T/tmpmf5ddc2u/77aeffecc910b7c6882406131a4d36469d935a55c401298dbce90adb89d7d275'> out[p] = open(fn, "rb").read() Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info. 
xarray/tests/test_backends.py::test_open_fsspec /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/fsspec/implementations/cached.py:589: ResourceWarning: unclosed file <_io.BufferedReader name='/var/folders/_x/gdn6kyqn5d5g9j_ygdpcv5vm0000gp/T/tmpmf5ddc2u/836ec38b21b701a0aae052168a2a2eab45504e6c6ba441f202e77f4f79b1a7c4'> out[p] = open(fn, "rb").read() Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info. xarray/tests/test_calendar_ops.py: 14 warnings
xarray/tests/test_cftime_offsets.py: 12 warnings
xarray/tests/test_computation.py: 4 warnings
/Users/mattia/MyGit/xarray/xarray/coding/cftime_offsets.py:1130: FutureWarning: Argument xarray/tests/test_dataarray.py::TestDataArray::test_to_and_from_cdms2_sgrid xarray/tests/test_dataarray.py::TestDataArray::test_to_and_from_cdms2_sgrid xarray/tests/test_dataarray.py::TestDataArray::test_to_and_from_cdms2_sgrid xarray/tests/test_dataarray.py::TestDataArray::test_to_and_from_cdms2_sgrid /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/numpy/ma/core.py:7891: DeprecationWarning: elementwise comparison failed; this will raise an error in the future. if not np.all(xinf == filled(np.isinf(y), False)): xarray/tests/test_dataset.py: 12 warnings
xarray/tests/test_units.py: 20 warnings
/Users/mattia/MyGit/xarray/xarray/core/common.py:1079: PendingDeprecationWarning: dropping variables using xarray/tests/test_distributed.py::test_open_mfdataset_can_open_files_with_cftime_index xarray/tests/test_distributed.py::test_open_mfdataset_can_open_files_with_cftime_index /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/tornado/ioloop.py:263: DeprecationWarning: There is no current event loop loop = asyncio.get_event_loop() xarray/tests/test_distributed.py::test_open_mfdataset_can_open_files_with_cftime_index /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/tornado/platform/asyncio.py:326: DeprecationWarning: There is no current event loop self.old_asyncio = asyncio.get_event_loop() xarray/tests/test_distributed.py::test_open_mfdataset_can_open_files_with_cftime_index /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/tornado/platform/asyncio.py:193: DeprecationWarning: There is no current event loop old_loop = asyncio.get_event_loop() xarray/tests/test_duck_array_ops.py::test_datetime_mean[True] xarray/tests/test_duck_array_ops.py::test_datetime_mean[True] xarray/tests/test_duck_array_ops.py::test_datetime_mean[True] xarray/tests/test_duck_array_ops.py::test_datetime_mean[True] xarray/tests/test_duck_array_ops.py::test_datetime_mean[True] xarray/tests/test_duck_array_ops.py::test_datetime_mean[True] xarray/tests/test_duck_array_ops.py::test_datetime_mean[True] xarray/tests/test_duck_array_ops.py::test_datetime_mean[True] /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/dask/array/reductions.py:611: RuntimeWarning: All-NaN slice encountered return np.nanmin(x_chunk, axis=axis, keepdims=keepdims) xarray/tests/test_duck_array_ops.py::test_datetime_mean[True] xarray/tests/test_duck_array_ops.py::test_datetime_mean[True] /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/dask/array/reductions.py:611: RuntimeWarning: All-NaN axis encountered return np.nanmin(x_chunk, axis=axis, 
keepdims=keepdims) xarray/tests/test_groupby.py::test_groupby_drops_nans /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/flox/aggregate_flox.py:105: RuntimeWarning: invalid value encountered in true_divide out /= nanlen(group_idx, array, size=size, axis=axis, fill_value=0) xarray/tests/test_plot.py::TestFacetGrid::test_facetgrid_polar xarray/tests/test_plot.py::TestFacetGrid::test_facetgrid_polar xarray/tests/test_plot.py::TestFacetGrid::test_facetgrid_polar /Users/mattia/MyGit/xarray/xarray/plot/plot.py:1478: MatplotlibDeprecationWarning: Auto-removal of grids by pcolor() and pcolormesh() is deprecated since 3.5 and will be removed two minor releases later; please call grid(False) first. primitive = ax.pcolormesh(x, y, z, **kwargs) xarray/tests/test_print_versions.py::test_show_versions /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/_distutils_hack/init.py:33: UserWarning: Setuptools is replacing distutils. warnings.warn("Setuptools is replacing distutils.") xarray/tests/test_variable.py::TestVariableWithDask::test_eq_all_dtypes xarray/tests/test_variable.py::TestVariableWithDask::test_eq_all_dtypes xarray/tests/test_variable.py::TestVariableWithDask::test_eq_all_dtypes xarray/tests/test_variable.py::TestVariableWithDask::test_eq_all_dtypes /Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/dask/core.py:119: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison return func(*(_execute_task(a, cache) for a in args)) xarray/tests/test_weighted.py::test_weighted_quantile_equal_weights[1-True-0.5-da2] xarray/tests/test_weighted.py::test_weighted_quantile_equal_weights[1-True-q1-da2] xarray/tests/test_weighted.py::test_weighted_quantile_equal_weights[3.14-True-0.5-da2] xarray/tests/test_weighted.py::test_weighted_quantile_equal_weights[3.14-True-q1-da2] 
/Users/mattia/miniconda3/envs/xarray/lib/python3.10/site-packages/numpy/lib/nanfunctions.py:1560: RuntimeWarning: All-NaN slice encountered
  r, k = function_base._ureduce(a,

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
========================= short test summary info ==========================
FAILED xarray/tests/test_backends.py::TestDask::test_attrs_mfdataset - Failed: DID NOT RAISE <class 'AttributeError'>
FAILED xarray/tests/test_backends.py::TestDask::test_open_mfdataset_attrs_file - AssertionError: assert 'test1' not in {'test1': 'foo', 'test2': 'bar'}
FAILED xarray/tests/test_backends.py::TestDask::test_open_mfdataset_attrs_file_path - AssertionError: assert 'test1' not in {'test1': 'foo', 'test2': 'bar'}
==== 3 failed, 14484 passed, 1189 skipped, 211 xfailed, 65 xpassed, 671 warnings in 1223.86s (0:20:23) ====
/Users/mattia/miniconda3/envs/xarray/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 31 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
```

Environment
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:43:44) [Clang 13.0.1 ]
python-bits: 64
OS: Darwin
OS-release: 21.5.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: None
LOCALE: (None, 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.8.1
xarray: 2022.6.0
pandas: 1.4.3
numpy: 1.23.2
scipy: None
netCDF4: 1.6.0
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.12.0
cftime: 1.6.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: 0.9.10.1
iris: None
bottleneck: None
dask: 2022.8.1
distributed: 2022.8.1
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.7.1
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 65.3.0
pip: 22.2.2
conda: None
pytest: 7.1.2
IPython: 8.4.0
sphinx: 5.1.1
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
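The failure mode in this issue, an attribute lazily flipping from unset to `{}` and thereby changing the hash, can be reproduced without xarray or dask. `Box` and `tokenize` below are hypothetical stand-ins for illustration only:

```python
class Box:
    """Stand-in for a Dataset whose attrs are created lazily."""

    def __init__(self, attrs=None):
        self._attrs = attrs

    @property
    def attrs(self):
        if self._attrs is None:   # first access silently replaces None with {}
            self._attrs = {}
        return self._attrs


def tokenize(obj):
    # stand-in for dask.base.tokenize: hash the internal state
    return repr(obj._attrs)


b = Box()
t0 = tokenize(b)   # 'None'
_ = b.attrs        # any operation that touches attrs mutates the state
t1 = tokenize(b)   # '{}'
assert t0 != t1    # the token changed even though the user changed nothing
```

Storing `{}` eagerly at construction time (as the linked PR #8101 effectively does) makes both tokens equal.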
1862636081 | PR_kwDOAMm_X85Yj6RX | 8101 | Fix tokenize with empty attrs | malmans2 22245117 | closed | 0 | 1 | 2023-08-23T06:13:05Z | 2023-09-09T04:53:19Z | 2023-09-09T04:53:19Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/8101 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1746328632 | PR_kwDOAMm_X85Sb9Mu | 7900 | fix polyfit changing the original object | malmans2 22245117 | closed | 0 | 1 | 2023-06-07T17:00:21Z | 2023-06-09T15:38:00Z | 2023-06-09T15:37:59Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7900 |
~New functions/methods are listed in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1717209758 | I_kwDOAMm_X85mWoqe | 7851 | Add pop methods | malmans2 22245117 | open | 0 | 1 | 2023-05-19T12:58:17Z | 2023-05-19T14:01:42Z | CONTRIBUTOR | Is your feature request related to a problem?

It's not related to a problem. I would find it useful to have pop methods.

Describe the solution you'd like

Is it feasible to add `pop` methods? For example, instead of doing this:

```python
import xarray as xr

ds = xr.Dataset({"foo": None})
foo = ds["foo"]
ds = ds.drop_vars("foo")
```

Describe alternatives you've considered

No response

Additional context

No response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7851/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
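The requested semantics can be sketched with a plain dict standing in for a `Dataset`. Since xarray objects are treated as immutable, a real `Dataset.pop` would likely return a new object rather than mutate in place; `pop_var` is a hypothetical name:

```python
def pop_var(mapping, name):
    """Dict-based sketch of a Dataset.pop: return the popped value
    together with a new mapping that no longer contains it."""
    rest = {k: v for k, v in mapping.items() if k != name}
    return mapping[name], rest


data = {"foo": [1, 2], "bar": [3]}
foo, data = pop_var(data, "foo")
assert foo == [1, 2]
assert "foo" not in data and "bar" in data
```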
327101646 | MDU6SXNzdWUzMjcxMDE2NDY= | 2192 | Subplots overlap each other using plot() and cartopy | malmans2 22245117 | closed | 0 | 1 | 2018-05-28T19:20:52Z | 2023-04-28T09:06:22Z | 2023-04-28T09:06:22Z | CONTRIBUTOR | When subplots are narrow (e.g., small lon range and large lat range), they overlap each other.
I'm not sure, but I think that […]

Output of xr.show_versions()
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2192/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1605486189 | PR_kwDOAMm_X85LDxq5 | 7575 | fix nczarr when libnetcdf>4.8.1 | malmans2 22245117 | closed | 0 | 1 | 2023-03-01T18:47:09Z | 2023-03-02T16:49:23Z | 2023-03-02T16:49:23Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7575 |
~~User visible changes (including notable bug fixes) are documented in […]~~

The latest version of netcdf-c does not allow writing an NCZarr file without xarray's […] |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7575/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1248389852 | PR_kwDOAMm_X844dcln | 6636 | Use `zarr` to validate attrs when writing to zarr | malmans2 22245117 | closed | 0 | 2 | 2022-05-25T16:46:03Z | 2022-06-03T18:48:54Z | 2022-06-03T18:48:48Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/6636 |
I think we can just use zarr to validate attributes, so we can support all types allowed by zarr. Note that I removed the checks on the keys, as I believe we can rely on zarr for that as well. However, there is an issue with mixed types (e.g., […]).

cc: @wankoelias @rabernat |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
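The idea of delegating validation to the store's own serializer can be sketched with a JSON round-trip as a stand-in for zarr's attribute encoding (`validate_attrs` is illustrative, not the PR's actual code):

```python
import json


def validate_attrs(attrs):
    """Reject attribute values the target store cannot serialize,
    instead of maintaining a hand-written allow-list of types."""
    for key, value in attrs.items():
        try:
            json.dumps(value)
        except TypeError as err:
            raise TypeError(f"attribute {key!r} is not serializable: {err}")


validate_attrs({"units": "m", "scale": 1.5, "flags": [1, 2]})  # ok

try:
    validate_attrs({"bad": {1, 2}})  # sets are not JSON-serializable
except TypeError as err:
    print(err)
```

The advantage of this design is that the allowed-types policy lives in one place, the store's encoder, so xarray never drifts out of sync with what zarr actually accepts.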
1244833334 | I_kwDOAMm_X85KMqY2 | 6628 | `{full,zeros,ones}_like` should return objects with the same type as the input object | malmans2 22245117 | closed | 0 | 1 | 2022-05-23T09:09:55Z | 2022-05-24T04:41:22Z | 2022-05-24T04:41:22Z | CONTRIBUTOR | What happened?

I'm getting issues using mypy with the changes to the typing of […]

cc: @headtr1ck

What did you expect to happen?

I think the object returned should be of the same type as the input object rather than […]

Minimal Complete Verifiable Example

```Python
import xarray as xr

def test_zeros_like(da: xr.DataArray) -> xr.DataArray:
    return xr.zeros_like(da)
```

MVCE confirmation
Relevant log output
Anything else we need to know?
Environment
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:43:32) [Clang 12.0.1 ]
python-bits: 64
OS: Darwin
OS-release: 21.5.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: None
LOCALE: (None, 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 2022.3.1.dev111+g4da7fdbd
pandas: 1.4.2
numpy: 1.22.3
scipy: 1.8.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.11.3
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.10
cfgrib: None
iris: None
bottleneck: None
dask: 2022.05.0
distributed: 2022.5.0
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.5.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 62.3.2
pip: 22.1.1
conda: None
pytest: 7.1.2
IPython: None
sphinx: None
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
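The requested behaviour corresponds to a `TypeVar`-based signature, which lets type checkers propagate the concrete input type to the return type. A sketch, where the body is a placeholder and `T` stands in for xarray's real type variable:

```python
from typing import TypeVar

T = TypeVar("T")


def zeros_like_typed(obj: T) -> T:
    """Sketch of the requested typing: whatever concrete type goes in
    comes back out, so mypy infers DataArray -> DataArray instead of a
    broad Dataset | DataArray union. The body is a placeholder; only
    the signature matters for the type checker."""
    return obj


result = zeros_like_typed([1, 2, 3])  # mypy infers list[int], not a union
assert result == [1, 2, 3]
```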
1234204439 | I_kwDOAMm_X85JkHcX | 6600 | `polyval` returns objects with different dimension order | malmans2 22245117 | closed | 0 | 2 | 2022-05-12T16:07:50Z | 2022-05-12T19:01:58Z | 2022-05-12T19:01:58Z | CONTRIBUTOR | What is your issue?

I noticed that the dimension order of the object returned by the latest […]

```python
import numpy as np
import xarray as xr

values = np.array(
    [
        "2021-04-01T05:25:19.000000000",
        "2021-04-01T05:25:29.000000000",
        "2021-04-01T05:25:39.000000000",
        "2021-04-01T05:25:49.000000000",
        "2021-04-01T05:25:59.000000000",
        "2021-04-01T05:26:09.000000000",
    ],
    dtype="datetime64[ns]",
)
azimuth_time = xr.DataArray(
    values, name="azimuth_time", coords={"azimuth_time": values - values[0]}
)
polyfit_coefficients = xr.DataArray(
    [
        [2.33333335e-43, 1.62499999e-43, 2.79166678e-43],
        [-1.15316667e-30, 1.49518518e-31, 9.08833333e-31],
        [-2.50272583e-18, -1.23851062e-18, -2.99098229e-18],
        [5.83965193e-06, -1.53321770e-07, -4.84640242e-06],
        [4.44739216e06, 1.45053974e06, 5.29960857e06],
    ],
    dims=("degree", "axis"),
    coords={"axis": [0, 1, 2], "degree": [4, 3, 2, 1, 0]},
)

ds_out = xr.polyval(azimuth_time.coords["azimuth_time"], polyfit_coefficients)
print(ds_out.dims)
```

Output with xarray v2022.3.1.dev103+gfc282d59:

```
('axis', 'azimuth_time')
```

Is this the expected behaviour? If yes, is it worth mentioning this change in what's new/breaking changes?

cc: @headtr1ck |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
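Until the order question is settled, a caller-side workaround is an explicit reorder, which in xarray is `ds_out.transpose("azimuth_time", "axis")`. The same idea with plain numpy, showing only the shapes involved:

```python
import numpy as np

# result in the new ("axis", "azimuth_time") order reported above
out = np.zeros((3, 6))

# move the leading axis to the end to restore ("azimuth_time", "axis")
fixed = np.moveaxis(out, 0, 1)
assert fixed.shape == (6, 3)
```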
1233717699 | I_kwDOAMm_X85JiQnD | 6597 | `polyval` with timedelta64 coordinates produces wrong results | malmans2 22245117 | closed | 0 | 3 | 2022-05-12T09:33:24Z | 2022-05-12T15:43:29Z | 2022-05-12T15:43:29Z | CONTRIBUTOR | What happened?

I'm not sure if this is a bug or an expected breaking change, but I'm not able to reproduce the results generated by […]

What did you expect to happen?

Both the stable and latest […]

Minimal Complete Verifiable Example

```Python
import xarray as xr
import numpy as np

values = np.array(
    [
        "2021-04-01T05:25:19.000000000",
        "2021-04-01T05:25:29.000000000",
        "2021-04-01T05:25:39.000000000",
        "2021-04-01T05:25:49.000000000",
        "2021-04-01T05:25:59.000000000",
        "2021-04-01T05:26:09.000000000",
    ],
    dtype="datetime64[ns]",
)
azimuth_time = xr.DataArray(
    values, name="azimuth_time", coords={"azimuth_time": values - values[0]}
)
polyfit_coefficients = xr.DataArray(
    [
        [2.33333335e-43, 1.62499999e-43, 2.79166678e-43],
        [-1.15316667e-30, 1.49518518e-31, 9.08833333e-31],
        [-2.50272583e-18, -1.23851062e-18, -2.99098229e-18],
        [5.83965193e-06, -1.53321770e-07, -4.84640242e-06],
        [4.44739216e06, 1.45053974e06, 5.29960857e06],
    ],
    dims=("degree", "axis"),
    coords={"axis": [0, 1, 2], "degree": [4, 3, 2, 1, 0]},
)

print(xr.polyval(azimuth_time, polyfit_coefficients))
```

MVCE confirmation

Relevant log output

```Python
# v2022.3.0 (Correct results)
<xarray.DataArray (azimuth_time: 6, axis: 3)>
array([[4447392.16      , 1450539.74      , 5299608.57      ],
       [4505537.25588366, 1448882.82238152, 5250846.359196  ],
       [4563174.92026797, 1446979.12250014, 5201491.44401733],
       [4620298.31815291, 1444829.59596699, 5151549.377964  ],
       [4676900.67053846, 1442435.23739315, 5101025.78153601],
       [4732975.25442459, 1439797.08038974, 5049926.34223336]])
Coordinates:
  * azimuth_time  (azimuth_time) datetime64[ns] 2021-04-01T05:25:19 ... 2021-...
  * axis          (axis) int64 0 1 2

# v2022.3.1.dev102+g6bb2b855 (Wrong results)
<xarray.DataArray (axis: 3, azimuth_time: 6)>
array([[1.59620685e+30, 1.59620689e+30, 1.59620693e+30, 1.59620697e+30,
        1.59620700e+30, 1.59620704e+30],
       [1.11164807e+30, 1.11164810e+30, 1.11164812e+30, 1.11164815e+30,
        1.11164818e+30, 1.11164821e+30],
       [1.90975722e+30, 1.90975727e+30, 1.90975732e+30, 1.90975736e+30,
        1.90975741e+30, 1.90975746e+30]])
Coordinates:
  * axis          (axis) int64 0 1 2
  * azimuth_time  (azimuth_time) timedelta64[ns] 00:00:00 00:00:10 ... 00:00:50
```

Anything else we need to know?

No response

Environment
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:43:32) [Clang 12.0.1 ]
python-bits: 64
OS: Darwin
OS-release: 21.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: None
LOCALE: (None, 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 2022.3.0 or 2022.3.1.dev102+g6bb2b855
pandas: 1.4.2
numpy: 1.22.3
scipy: 1.8.0
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.11.3
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.10
cfgrib: None
iris: None
bottleneck: None
dask: 2022.05.0
distributed: 2022.5.0
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.3.0
cupy: None
pint: None
sparse: None
setuptools: 62.2.0
pip: 22.1
conda: None
pytest: 7.1.2
IPython: None
sphinx: None
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
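At its core, `polyval` is Horner evaluation of coefficients ordered from highest degree to lowest, matching the `degree` coordinate `[4, 3, 2, 1, 0]` in the example. The wrong results above stem from how datetime/timedelta coordinates are converted to numbers before this step; the arithmetic itself is a short sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial whose coefficients run from the highest
    degree down to the constant term (as in degree=[4, 3, 2, 1, 0])."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result


# p(x) = 2x^2 + 3x + 1 evaluated at x = 10
assert horner([2, 3, 1], 10) == 231
```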
620514214 | MDU6SXNzdWU2MjA1MTQyMTQ= | 4077 | open_mfdataset overwrites variables with different values but overlapping coordinates | malmans2 22245117 | open | 0 | 12 | 2020-05-18T21:22:09Z | 2022-04-28T15:08:53Z | CONTRIBUTOR | In the example below I'm opening and concatenating two datasets using […]. Is this the expected default behavior? I would expect to get at least a warning, but maybe I'm misunderstanding the default arguments. I tried to play with the arguments, but I couldn't figure out which argument I should change to get an error in these scenarios.

MCVE Code Sample
VersionsOutput of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: None python: 3.8.2 | packaged by conda-forge | (default, Apr 24 2020, 08:20:52) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 5.4.0-29-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.6 libnetcdf: 4.7.4 xarray: 0.15.1 pandas: 1.0.3 numpy: 1.18.4 scipy: None netCDF4: 1.5.3 pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: 1.1.3 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.2 dask: 2.16.0 distributed: 2.16.0 matplotlib: None cartopy: None seaborn: None numbagg: None setuptools: 46.4.0.post20200518 pip: 20.1 conda: None pytest: None IPython: 7.13.0 sphinx: None |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4077/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
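The silent-overwrite behaviour described above can be isolated without files; a minimal sketch using in-memory datasets (the dataset contents are hypothetical). `xr.merge` with its default `compat="no_conflicts"` does raise on conflicting values, while `compat="override"` silently keeps the first dataset's values, which is one knob worth checking when combining files:

```python
import xarray as xr

ds1 = xr.Dataset({"v": ("x", [1.0])}, coords={"x": [0]})
ds2 = xr.Dataset({"v": ("x", [2.0])}, coords={"x": [0]})

# Default compat="no_conflicts" flags the conflicting values ...
try:
    xr.merge([ds1, ds2])
except xr.MergeError:
    print("conflict detected")

# ... while compat="override" silently keeps the first dataset's value.
merged = xr.merge([ds1, ds2], compat="override")
print(float(merged["v"]))  # 1.0
```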
1183534905 | PR_kwDOAMm_X841KS8J | 6420 | Add support in the "zarr" backend for reading NCZarr data | malmans2 22245117 | closed | 0 | 6 | 2022-03-28T14:32:27Z | 2022-04-14T15:36:14Z | 2022-04-14T15:36:05Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/6420 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6420/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1172229856 | I_kwDOAMm_X85F3s7g | 6374 | Should the zarr backend support NCZarr conventions? | malmans2 22245117 | closed | 0 | 18 | 2022-03-17T11:00:17Z | 2022-04-14T15:36:05Z | 2022-04-14T15:36:05Z | CONTRIBUTOR | What is your issue?

As part of the CZI EOSS4 grant, at B-Open we are keen on improving xarray/zarr cross-community conventions. It looks like xarray's
Currently, it is possible to open a

```python
ds = xr.Dataset(
    {
        "a": (("y", "x"), np.random.rand(6).reshape(2, 3)),
        "b": (("y", "x"), np.random.rand(6).reshape(2, 3)),
    },
    coords={"y": [0, 1], "x": [10, 20, 30]},
)
ds.to_netcdf("file://test.nczarr#mode=nczarr")
ds_from_nczarr = xr.open_dataset("file://test.nczarr#mode=nczarr", engine="netcdf4")
xr.testing.assert_identical(ds, ds_from_nczarr)

xr.open_dataset("test.nczarr", engine="zarr")
# KeyError: 'Zarr object is missing the attribute
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6374/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
559283550 | MDU6SXNzdWU1NTkyODM1NTA= | 3745 | groupby drops the variable used to group | malmans2 22245117 | open | 0 | 0 | 2020-02-03T19:25:06Z | 2022-04-09T02:25:17Z | CONTRIBUTOR | MCVE Code Sample
Seasonal mean

```python
ds_season = ds.groupby('time.season').mean()
ds_season
```

```
<xarray.Dataset>
Dimensions:  (season: 4, x: 275, y: 205)
Coordinates:
    yc       (y, x) float64 16.53 16.78 17.02 17.27 ... 28.26 28.01 27.76 27.51
    xc       (y, x) float64 189.2 189.4 189.6 189.7 ... 17.65 17.4 17.15 16.91
  * season   (season) object 'DJF' 'JJA' 'MAM' 'SON'
Dimensions without coordinates: x, y
Data variables:
    Tair     (season, y, x) float64 nan nan nan nan ... 23.13 22.06 21.72 21.94
```

The seasons are ordered alphabetically. I want to sort them based on time, but time was dropped, so I have to do this:

```python
time_season = ds['time'].groupby('time.season').mean()
ds_season.sortby(time_season)
```

```
<xarray.Dataset>
Dimensions:  (season: 4, x: 275, y: 205)
Coordinates:
    yc       (y, x) float64 16.53 16.78 17.02 17.27 ... 28.26 28.01 27.76 27.51
    xc       (y, x) float64 189.2 189.4 189.6 189.7 ... 17.65 17.4 17.15 16.91
  * season   (season) object 'SON' 'DJF' 'MAM' 'JJA'
Dimensions without coordinates: x, y
Data variables:
    Tair     (season, y, x) float64 nan nan nan nan ... 29.27 28.39 27.94 28.05
```

Expected Output

Why does groupby drop time? I would expect a dataset that looks like this:

```python
ds_season['time'] = time_season
ds_season
```

```
<xarray.Dataset>
Dimensions:  (season: 4, x: 275, y: 205)
Coordinates:
    yc       (y, x) float64 16.53 16.78 17.02 17.27 ... 28.26 28.01 27.76 27.51
    xc       (y, x) float64 189.2 189.4 189.6 189.7 ... 17.65 17.4 17.15 16.91
  * season   (season) object 'DJF' 'JJA' 'MAM' 'SON'
Dimensions without coordinates: x, y
Data variables:
    Tair     (season, y, x) float64 nan nan nan nan ... 23.13 22.06 21.72 21.94
    time     (season) object 1982-01-16 12:00:00 ... 1981-10-17 00:00:00
```

Problem Description

I often use

Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
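The sortby workaround quoted in the issue can be reproduced on a small synthetic dataset (the monthly toy data below is hypothetical, standing in for the air-temperature data used in the issue):

```python
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2000-01-01", periods=24, freq="MS")
ds = xr.Dataset({"Tair": ("time", np.arange(24.0))}, coords={"time": time})

ds_season = ds.groupby("time.season").mean()
# "time" is dropped by the groupby reduction, so the mean time per
# season has to be recomputed separately ...
time_season = ds["time"].groupby("time.season").mean()
# ... and then used to reorder the alphabetical DJF/JJA/MAM/SON seasons.
ds_season = ds_season.sortby(time_season)
```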
323839238 | MDU6SXNzdWUzMjM4MzkyMzg= | 2145 | Dataset.resample() adds time dimension to independent variables | malmans2 22245117 | open | 0 | 5 | 2018-05-17T01:15:01Z | 2022-03-21T05:15:52Z | CONTRIBUTOR | Code Sample, a copy-pastable example if possible

```python
ds = ds.resample(time='1D', keep_attrs=True).mean()
```

Problem description

I'm downsampling in time a dataset which also contains timeless variables. I've noticed that resample adds the time dimension to the timeless variables. One workaround is:

1) Split the dataset into a timeless and a time-dependent dataset
2) Resample the time-dependent dataset
3) Merge the two datasets

This is not a big deal, but I was wondering if I'm missing some flag that avoids this behavior. If not, is it something that can be easily implemented in resample? It would be very useful for datasets with variables on staggered grids.

Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2145/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
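The three-step workaround listed above (split, resample, merge) can be sketched as follows (the variable names and frequencies are hypothetical, not from the issue):

```python
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2000-01-01", periods=4, freq="12h")
ds = xr.Dataset(
    {"v": ("time", np.arange(4.0)), "mask": ("x", [1.0, 2.0])},
    coords={"time": time},
)

# 1) split off the timeless variables, 2) resample only the
# time-dependent part, 3) merge the two pieces back together.
timeless = ds.drop_dims("time")
resampled = ds.drop_dims("x").resample(time="1D").mean()
out = xr.merge([timeless, resampled])
print(out["mask"].dims)  # ('x',) -- no spurious time dimension
```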
925444927 | MDU6SXNzdWU5MjU0NDQ5Mjc= | 5495 | Add `typing-extensions` to the list of dependencies? | malmans2 22245117 | closed | 0 | 8 | 2021-06-19T18:26:36Z | 2021-08-07T15:28:39Z | 2021-07-22T23:02:03Z | CONTRIBUTOR |
However,
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
926503178 | MDExOlB1bGxSZXF1ZXN0Njc0Nzk4NTAz | 5507 | specify typing-extensions version | malmans2 22245117 | closed | 0 | 1 | 2021-06-21T18:58:50Z | 2021-07-02T12:45:35Z | 2021-07-02T12:45:35Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5507 |
See: https://github.com/pydata/xarray/pull/5503#discussion_r655550526 cc: @dcherian |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
925981767 | MDExOlB1bGxSZXF1ZXN0Njc0MzUxMjQw | 5503 | Add typing-extensions to dependencies | malmans2 22245117 | closed | 0 | 3 | 2021-06-21T08:47:44Z | 2021-06-21T16:51:18Z | 2021-06-21T15:13:21Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5503 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
912932344 | MDExOlB1bGxSZXF1ZXN0NjYzMDM3MzU0 | 5445 | Add `xr.unify_chunks()` top level method | malmans2 22245117 | closed | 0 | 7 | 2021-06-06T19:51:47Z | 2021-06-21T08:53:40Z | 2021-06-16T14:56:59Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5445 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
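The top-level function added by this PR can be exercised with a tiny example (requires dask; the arrays and chunk sizes here are arbitrary illustrations):

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(4.0), dims="x").chunk(2)
b = xr.DataArray(np.arange(4.0), dims="x").chunk(1)

# xr.unify_chunks rechunks its arguments to a common chunking.
a2, b2 = xr.unify_chunks(a, b)
print(a2.chunks == b2.chunks)  # True
```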
904865813 | MDExOlB1bGxSZXF1ZXN0NjU1OTg2NTg4 | 5393 | Don't drop unreduced variables | malmans2 22245117 | closed | 0 | 4 | 2021-05-28T07:40:30Z | 2021-06-21T08:53:31Z | 2021-06-12T17:45:00Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5393 |
Reduce methods such as cc: @rcaneill |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5393/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
904934839 | MDExOlB1bGxSZXF1ZXN0NjU2MDQ5NDcx | 5394 | Allow selecting variables using a list with mixed data types | malmans2 22245117 | closed | 0 | 2 | 2021-05-28T08:24:23Z | 2021-06-21T08:53:29Z | 2021-06-12T17:44:05Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5394 |
Lists passed to |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5394/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
912477313 | MDExOlB1bGxSZXF1ZXN0NjYyNjMyNzI2 | 5440 | Consistent chunking after broadcasting | malmans2 22245117 | closed | 0 | 1 | 2021-06-05T22:27:39Z | 2021-06-21T08:53:12Z | 2021-06-21T08:53:09Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5440 |
I think it does the job, although I'm not sure whether this is the best approach. A couple of questions:
1. I should probably add a test. Where is the best place? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
911393744 | MDU6SXNzdWU5MTEzOTM3NDQ= | 5435 | Broadcast does not return Datasets with unified chunks | malmans2 22245117 | open | 0 | 3 | 2021-06-04T11:09:29Z | 2021-06-16T17:41:12Z | CONTRIBUTOR | What happened: If I broadcast a Dataset with chunked DataArrays, the resulting DataArrays are chunked differently. What you expected to happen: If I broadcast a dataset with 2 vectors of chunk size 1, I'd expect to get 2D arrays with chunksize (1, 1), rather than (1, N) and (M, 1). Minimal Complete Verifiable Example:
Anything else we need to know?: Environment: Output of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: None python: 3.9.5 (default, May 18 2021, 19:34:48) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 5.8.0-53-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: None libnetcdf: None xarray: 0.18.2 pandas: 1.2.4 numpy: 1.20.3 scipy: None netCDF4: None pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: None nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2021.05.1 distributed: 2021.05.1 matplotlib: None cartopy: None seaborn: None numbagg: None pint: None setuptools: 52.0.0.post20210125 pip: 21.1.2 conda: None pytest: None IPython: 7.24.1 sphinx: None |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5435/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
903983811 | MDU6SXNzdWU5MDM5ODM4MTE= | 5387 | KeyError when trying to select a list of DataArrays with different name type | malmans2 22245117 | closed | 0 | 6 | 2021-05-27T16:49:27Z | 2021-06-12T17:44:05Z | 2021-06-12T17:44:05Z | CONTRIBUTOR | What happened: Looks like I can't select a list of DataArrays with different name type. What you expected to happen: If this is not a bug, consider raising a more informative error. Minimal Complete Verifiable Example: ```python import xarray as xr from xarray import Dataset, DataArray keys = ["foo", 1] ds = Dataset() for key in keys: ds[key] = DataArray() ds[keys]
Environment: Output of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: None python: 3.9.4 | packaged by conda-forge | (default, May 10 2021, 22:13:33) [GCC 9.3.0] python-bits: 64 OS: Linux OS-release: 5.8.0-53-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.10.6 libnetcdf: 4.8.0 xarray: 0.18.2 pandas: 1.2.4 numpy: 1.20.3 scipy: None netCDF4: 1.5.6 pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: 1.5.0 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2021.05.0 distributed: 2021.05.0 matplotlib: 3.4.2 cartopy: None seaborn: None numbagg: None pint: 0.17 setuptools: 49.6.0.post20210108 pip: 21.1.2 conda: None pytest: 6.2.4 IPython: 7.23.1 sphinx: None |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5387/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
897210919 | MDU6SXNzdWU4OTcyMTA5MTk= | 5354 | Should weighted operations raise an error when dimensions don't exist? | malmans2 22245117 | closed | 0 | 2 | 2021-05-20T17:57:05Z | 2021-05-23T23:45:47Z | 2021-05-23T23:45:47Z | CONTRIBUTOR | What happened: Weighted operations don't raise an error when the dimensions passed don't exist. What you expected to happen: This is not really a bug, but I find it a bit confusing because it's not consistent with the same "unweighted" operation. Minimal Complete Verifiable Example:
Environment: Output of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: None python: 3.9.4 | packaged by conda-forge | (default, May 10 2021, 22:13:33) [GCC 9.3.0] python-bits: 64 OS: Linux OS-release: 3.10.0-1062.18.1.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: ('en_GB', 'UTF-8') libhdf5: 1.10.6 libnetcdf: 4.7.4 xarray: 0.18.1.dev30+g2578fc3 pandas: 1.2.4 numpy: 1.20.2 scipy: 1.6.3 netCDF4: 1.5.6 pydap: installed h5netcdf: 0.11.0 h5py: 3.2.1 Nio: None zarr: 2.8.1 cftime: 1.4.1 nc_time_axis: 1.2.0 PseudoNetCDF: None rasterio: 1.2.3 cfgrib: 0.9.9.0 iris: None bottleneck: 1.3.2 dask: 2021.05.0 distributed: 2021.05.0 matplotlib: 3.4.2 cartopy: 0.19.0.post1 seaborn: 0.11.1 numbagg: installed pint: None setuptools: 49.6.0.post20210108 pip: 21.1.1 conda: None pytest: None IPython: 7.23.1 sphinx: None |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
898841079 | MDExOlB1bGxSZXF1ZXN0NjUwNjU2MDg1 | 5362 | Check dimensions before applying weighted operations | malmans2 22245117 | closed | 0 | 4 | 2021-05-22T16:51:54Z | 2021-05-23T23:45:47Z | 2021-05-23T23:45:47Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5362 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
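After this change, weighted reductions validate their dimensions the same way plain reductions do; a minimal sketch (the dataset and the nonexistent dimension name are hypothetical):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"a": ("x", np.arange(3.0))})
weights = xr.DataArray(np.ones(3), dims="x")

# A dimension that exists nowhere in the dataset now raises,
# matching the behaviour of the unweighted reduction.
try:
    ds.weighted(weights).mean("not_a_dim")
except ValueError:
    print("raised ValueError")
```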
841097432 | MDExOlB1bGxSZXF1ZXN0NjAwODgxNzE4 | 5076 | Improve map_blocks docs | malmans2 22245117 | closed | 0 | 1 | 2021-03-25T16:23:35Z | 2021-03-29T17:46:02Z | 2021-03-29T17:45:59Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5076 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
839914235 | MDU6SXNzdWU4Mzk5MTQyMzU= | 5071 | Applying a function on a subset of variables using `map_blocks` is much slower | malmans2 22245117 | closed | 0 | 1 | 2021-03-24T16:39:08Z | 2021-03-29T17:45:58Z | 2021-03-29T17:45:58Z | CONTRIBUTOR | What happened:
Looks like when I use What you expected to happen: In the example below, I wouldn't expect such a difference in computation time. Minimal Complete Verifiable Example:
```python
%%timeit
# Subsample the dataset before calling map_blocks
func(ds).map_blocks(func).compute()
```
Anything else we need to know?: Environment: Output of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: None python: 3.9.2 | packaged by conda-forge | (default, Feb 21 2021, 05:02:46) [GCC 9.3.0] python-bits: 64 OS: Linux OS-release: 3.10.0-1062.18.1.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.6 libnetcdf: 4.7.4 xarray: 0.17.0 pandas: 1.2.3 numpy: 1.20.1 scipy: 1.6.1 netCDF4: 1.5.6 pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: 1.4.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2021.03.0 distributed: 2021.03.0 matplotlib: 3.3.4 cartopy: None seaborn: None numbagg: None pint: None setuptools: 49.6.0.post20210108 pip: 21.0.1 conda: None pytest: 6.2.2 IPython: 7.21.0 sphinx: None |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
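The resolution suggested by the timings above is to subset the dataset before calling map_blocks; a minimal runnable sketch (requires dask; the toy dataset and the identity-style function are hypothetical, not the ones from the issue):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"a": ("x", np.arange(4.0)), "b": ("x", np.arange(4.0))}
).chunk({"x": 2})

# Keep only the variables the function needs *before* map_blocks,
# so the unused variables never travel through the task graph.
result = ds[["a"]].map_blocks(lambda block: block + 1).compute()
print(result["a"].values)
```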
674379292 | MDU6SXNzdWU2NzQzNzkyOTI= | 4319 | KeyError when faceting along time dimensions | malmans2 22245117 | closed | 0 | 4 | 2020-08-06T14:57:15Z | 2020-08-06T15:43:43Z | 2020-08-06T15:43:43Z | CONTRIBUTOR | What happened:
I think the latest Minimal Complete Verifiable Example:
Environment: Output of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: None python: 3.7.8 | packaged by conda-forge | (default, Jul 31 2020, 02:25:08) [GCC 7.5.0] python-bits: 64 OS: Linux OS-release: 5.4.0-42-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.6 libnetcdf: 4.7.4 xarray: 0.16.0 pandas: 1.1.0 numpy: 1.19.1 scipy: 1.5.2 netCDF4: 1.5.4 pydap: None h5netcdf: None h5py: None Nio: None zarr: 2.4.0 cftime: 1.2.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.2 dask: 2.22.0 distributed: 2.22.0 matplotlib: 3.3.0 cartopy: 0.18.0 seaborn: None numbagg: None pint: None setuptools: 49.2.1.post20200802 pip: 20.2.1 conda: None pytest: 6.0.1 IPython: 7.17.0 sphinx: None |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
557931967 | MDU6SXNzdWU1NTc5MzE5Njc= | 3734 | Wrong facet plots when all 2D arrays have one value only | malmans2 22245117 | closed | 0 | 1 | 2020-01-31T06:00:15Z | 2020-04-03T19:48:54Z | 2020-04-03T19:48:54Z | CONTRIBUTOR | MCVE Code Sample
```python
# Create DataArray
da = xr.DataArray(np.zeros((10, 10, 4)))
```

```python
# Default plot
# Wrong: all of them should be 0.
da.plot(col='dim_2')
```

Expected Output

```python
# Providing colorbar limits
# Correct.
da.plot(col='dim_2', vmin=-.1, vmax=.1)
```

Problem Description

If I simply use

Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
397063221 | MDU6SXNzdWUzOTcwNjMyMjE= | 2662 | open_mfdataset in v.0.11.1 is very slow | malmans2 22245117 | closed | 0 | 6 | 2019-01-08T19:59:47Z | 2019-01-17T13:05:43Z | 2019-01-17T13:05:43Z | CONTRIBUTOR | I have several repositories corresponding to different time periods of time.
Each repository contains several netCDF files with different variables.
Here is a simplified example:
Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
304624171 | MDU6SXNzdWUzMDQ2MjQxNzE= | 1985 | Load a small subset of data from a big dataset takes forever | malmans2 22245117 | closed | 0 | 8 | 2018-03-13T04:27:58Z | 2019-01-13T01:46:08Z | 2019-01-13T01:46:08Z | CONTRIBUTOR | Code Sample

```python
def cut_dataset(ds2cut,
                varList=['Temp', 'S', 'Eta', 'U', 'V', 'W'],
                lonRange=[-180, 180],
                latRange=[-90, 90],
                depthRange=[0, float("inf")],
                timeRange=['2007-09-01T00', '2008-08-31T18'],
                timeFreq='1D',
                sampMethod='mean',
                interpC=True,
                saveNetCDF=False):
    """
    Cut the dataset
    """

# 3D test
ds_cut, grid_cut = cut_dataset(ds, varList=['Eta'],
                               latRange=[65, 65.5],
                               depthRange=[0, 2],
                               timeRange=['2007-11-15T00', '2007-11-16T00'],
                               timeFreq='1D',
                               sampMethod='mean',
                               interpC=False,
                               saveNetCDF='3Dvariable.nc')

# 4D test
ds_cut, grid_cut = cut_dataset(ds, varList=['Temp'],
                               lonRange=[-30, -29.5],
                               latRange=[65, 65.5],
                               depthRange=[0, 2],
                               timeRange=['2007-11-15T00', '2007-11-16T00'],
                               timeFreq='1D',
                               sampMethod='mean',
                               interpC=False,
                               saveNetCDF='4Dvariable.nc')
```

Problem description

I'm working with a big dataset; however, most of the time I only need a small subset of the data. My idea was to open and concatenate everything with open_mfdataset, and then extract subsets of data using the indexing routines. This approach works very well when I extract 3D variables (just lon, lat, and time), but it fails when I try to extract 4D variables (lon, lat, time, and depth). It doesn't actually fail, but to_netcdf takes forever. When I open a smaller dataset from the very beginning (let's say just November), I'm also able to extract 4D variables. When I load the sub-dataset after using the indexing routines, does xarray need to read the whole original 4D variable? If yes, I should probably change my approach and open subsets of the data from the very beginning. If no, am I doing something wrong?

Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
CREATE TABLE [issues] ( [id] INTEGER PRIMARY KEY, [node_id] TEXT, [number] INTEGER, [title] TEXT, [user] INTEGER REFERENCES [users]([id]), [state] TEXT, [locked] INTEGER, [assignee] INTEGER REFERENCES [users]([id]), [milestone] INTEGER REFERENCES [milestones]([id]), [comments] INTEGER, [created_at] TEXT, [updated_at] TEXT, [closed_at] TEXT, [author_association] TEXT, [active_lock_reason] TEXT, [draft] INTEGER, [pull_request] TEXT, [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT, [state_reason] TEXT, [repo] INTEGER REFERENCES [repos]([id]), [type] TEXT ); CREATE INDEX [idx_issues_repo] ON [issues] ([repo]); CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]); CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]); CREATE INDEX [idx_issues_user] ON [issues] ([user]);