issues
4 rows where state = "closed", type = "issue" and user = 13190237 sorted by updated_at descending
id | node_id | number | title | user | state | locked | comments | created_at | updated_at ▼ | closed_at | author_association | state_reason | repo | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
374025325 | MDU6SXNzdWUzNzQwMjUzMjU= | 2511 | Array indexing with dask arrays | ulijh 13190237 | closed | 0 | 20 | 2018-10-25T16:13:11Z | 2023-03-15T02:48:00Z | 2023-03-15T02:48:00Z | CONTRIBUTOR | completed | xarray 13221727 | issue

body:

**Code example**

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.ones((10, 10))).chunk(2)
indc = xr.DataArray(np.random.randint(0, 9, 10)).chunk(2)

# This fails:
da[{'dim_1': indc}].values
```

**Problem description**

Indexing with chunked arrays fails, whereas it works fine with "normal" (numpy-backed) arrays. When the indices are the result of a lazy calculation, I would like to continue lazily.

**Expected Output**

I would expect the same output as in the un-chunked case:

```python
da[{'dim_1': indc.compute()}].values
# Returns:
# array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
```

**Output of `xr.show_versions()`**

reactions:

{ "url": "https://api.github.com/repos/pydata/xarray/issues/2511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
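The issue's expected-output snippet already points at the interim workaround. A minimal runnable sketch of it, assuming the index array is small enough to materialize eagerly while the rest of the pipeline stays lazy:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.ones((10, 10))).chunk(2)
indc = xr.DataArray(np.random.randint(0, 9, 10)).chunk(2)

# Materialize only the indexer: xarray then sees a numpy-backed index
# array, and the orthogonal indexing path works as in the report above.
result = da[{'dim_1': indc.compute()}]
print(result.values)
```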
608974755 | MDU6SXNzdWU2MDg5NzQ3NTU= | 4015 | apply_ufunc gives wrong dtype with dask=parallelized and vectorized=True | ulijh 13190237 | closed | 0 | 2 | 2020-04-29T11:17:48Z | 2020-08-19T06:57:56Z | 2020-08-19T06:57:56Z | CONTRIBUTOR | completed | xarray 13221727 | issue

body:

Applying a function to a data array with …

**MCVE Code Sample**

```python
import numpy as np
import xarray as xr

def func(x):
    return np.sum(x ** 2)

da = xr.DataArray(np.arange(2 * 3 * 4).reshape(2, 3, 4))
da = da + 1j * da
da = da.chunk(dict(dim_1=1))

da2 = xr.apply_ufunc(
    func,
    da,
    vectorize=True,
    dask="parallelized",
    output_dtypes=[da.dtype],
)

assert da2.dtype == da.dtype, "wrong dtype"
```

**Expected Output**

**Problem Description**

To me it seems that the kwarg `output_dtypes` is not respected when `vectorize=True`.

**Versions**

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.2 (default, Apr 8 2020, 14:31:25) [GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 5.6.5-arch3-1
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: de_DE.utf8
LOCALE: de_DE.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.7.3

xarray: 0.15.2.dev47+g33a66d63
pandas: 1.0.3
numpy: 1.18.3
scipy: 1.4.1
netCDF4: 1.5.3
pydap: None
h5netcdf: 0.7.4
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.1.1.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2.12.0
distributed: 2.14.0
matplotlib: 3.2.1
cartopy: 0.17.0
seaborn: 0.10.0
numbagg: None
pint: None
setuptools: 46.1.3
pip: 20.0.2
conda: None
pytest: 5.4.1
IPython: 7.13.0
sphinx: 3.0.2
```

reactions:

{ "url": "https://api.github.com/repos/pydata/xarray/issues/4015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
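A quick way to make the mismatch in this report visible, as a sketch that is not part of the original MCVE: compare the dtype the lazy result declares against the dtype of the materialized values.

```python
# Continuing from the MCVE above. With the bug present, the dtype
# reported before computing and the dtype actually produced by the
# vectorized computation can disagree.
print("declared:", da2.dtype)
print("computed:", da2.compute().dtype)
```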
483280810 | MDU6SXNzdWU0ODMyODA4MTA= | 3237 | `argmax()` causes dask to compute | ulijh 13190237 | closed | 0 | 4 | 2019-08-21T08:41:20Z | 2019-09-06T23:15:19Z | 2019-09-06T23:15:19Z | CONTRIBUTOR | completed | xarray 13221727 | issue

body:

**Problem Description**

While digging for #2511 I found that …

**MCVE Code Sample**

```python
In [1]: import numpy as np

In [2]: class Scheduler:
   ...:     ...

In [3]: scheduler = Scheduler()

In [4]: with dask.config.set(scheduler=scheduler):
   ...:     ...

RuntimeError                              Traceback (most recent call last)
~/src/xarray/xarray/core/common.py in wrapped_func(self, dim, axis, skipna, **kwargs)
~/src/xarray/xarray/core/dataarray.py in reduce(self, func, dim, axis, keep_attrs, keepdims, **kwargs)
~/src/xarray/xarray/core/variable.py in reduce(self, func, dim, axis, keep_attrs, keepdims, allow_lazy, **kwargs)
~/src/xarray/xarray/core/duck_array_ops.py in f(values, axis, skipna, **kwargs)
~/src/xarray/xarray/core/nanops.py in nanargmax(a, axis)
/usr/lib/python3.7/site-packages/dask/array/core.py in __bool__(self)
/usr/lib/python3.7/site-packages/dask/base.py in compute(self, **kwargs)
/usr/lib/python3.7/site-packages/dask/base.py in compute(*args, **kwargs)
```

**Expected Output**

None of the methods should actually compute:

```
Total number of computes: 0
```

Output of `xr.show_versions()`

reactions:

{ "url": "https://api.github.com/repos/pydata/xarray/issues/3237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
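The body of the `Scheduler` class is truncated in this export. A sketch of what such a compute-counting scheduler can look like; the class body here is a reconstruction under that assumption, not the issue author's exact code:

```python
import dask

class Scheduler:
    """A callable dask scheduler that counts how often a compute is
    triggered and raises once a limit is exceeded, so that accidental
    computes surface as a RuntimeError like the traceback above."""

    def __init__(self, max_computes=0):
        self.total_computes = 0
        self.max_computes = max_computes

    def __call__(self, dsk, keys, **kwargs):
        self.total_computes += 1
        if self.total_computes > self.max_computes:
            raise RuntimeError(
                "Too many computes: %d (max allowed: %d)"
                % (self.total_computes, self.max_computes)
            )
        # Fall through to dask's synchronous scheduler for the real work.
        return dask.get(dsk, keys, **kwargs)
```

Used exactly as in `In [3]`/`In [4]` of the MCVE: instantiate it, then run the operation under test inside `with dask.config.set(scheduler=scheduler):`.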
260279615 | MDU6SXNzdWUyNjAyNzk2MTU= | 1591 | indexing/groupby fails on array opened with chunks from netcdf | ulijh 13190237 | closed | 0 | 2 | 2017-09-25T13:37:43Z | 2017-09-26T08:15:45Z | 2017-09-26T05:36:26Z | CONTRIBUTOR | completed | xarray 13221727 | issue

body:

Hi,

since the last update of dask (to version 0.15.3), iterating over a groupby object and indexing using np.int64 fails when the DataArray was opened with chunks from netcdf. I'm using xarray version 0.9.6 and the 'h5netcdf' engine for reading/writing. To reproduce:

```python
import numpy as np
import xarray as xr

arr = xr.DataArray(np.random.rand(2, 3, 4), dims=['one', 'two', 'three'])
arr.to_netcdf('test.nc', engine='h5netcdf')
arr_disk = xr.open_dataarray('test.nc', engine='h5netcdf', chunks=dict(one=1))

# This produces the error:
[g for g in arr_disk.groupby('one')]
```

```
/usr/lib/python3.6/site-packages/xarray/core/groupby.py in _iter_grouped(self)
    296         """Iterate over each element in this group"""
    297         for indices in self._group_indices:
--> 298             yield self._obj.isel(**{self._group_dim: indices})
    299
    300     def _infer_concat_args(self, applied_example):

/usr/lib/python3.6/site-packages/xarray/core/dataarray.py in isel(self, drop, **indexers)
    677         DataArray.sel
    678         """
--> 679         ds = self._to_temp_dataset().isel(drop=drop, **indexers)
    680         return self._from_temp_dataset(ds)
    681

/usr/lib/python3.6/site-packages/xarray/core/dataset.py in isel(self, drop, **indexers)
   1141         for name, var in iteritems(self._variables):
   1142             var_indexers = dict((k, v) for k, v in indexers if k in var.dims)
--> 1143             new_var = var.isel(**var_indexers)
   1144             if not (drop and name in var_indexers):
   1145                 variables[name] = new_var

/usr/lib/python3.6/site-packages/xarray/core/variable.py in isel(self, **indexers)
    568             if dim in indexers:
    569                 key[i] = indexers[dim]
--> 570         return self[tuple(key)]
    571
    572     def squeeze(self, dim=None):

/usr/lib/python3.6/site-packages/xarray/core/variable.py in __getitem__(self, key)
    398         dims = tuple(dim for k, dim in zip(key, self.dims)
    399                      if not isinstance(k, integer_types))
--> 400         values = self._indexable_data[key]
    401         # orthogonal indexing should ensure the dimensionality is consistent
    402         if hasattr(values, 'ndim'):

/usr/lib/python3.6/site-packages/xarray/core/indexing.py in __getitem__(self, key)
    496                 value = value[(slice(None),) * axis + (subkey,)]
    497             else:
--> 498                 value = self.array[key]
    499         return value
    500

/home/herter/.local/lib/python3.6/site-packages/dask/array/core.py in __getitem__(self, index)
   1220
   1221         from .slicing import normalize_index, slice_with_dask_array
-> 1222         index2 = normalize_index(index, self.shape)
   1223
   1224         if any(isinstance(i, Array) for i in index2):

/home/herter/.local/lib/python3.6/site-packages/dask/array/slicing.py in normalize_index(idx, shape)
    760         idx = idx + (slice(None),) * (len(shape) - n_sliced_dims)
    761     if len([i for i in idx if i is not None]) > len(shape):
--> 762         raise IndexError("Too many indices for array")
    763
    764     none_shape = []

IndexError: Too many indices for array
```

I'm getting the same error when doing …

Thanks, Uli

reactions:

{ "url": "https://api.github.com/repos/pydata/xarray/issues/1591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
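A minimal workaround sketch, under the assumption that laziness can be sacrificed for this step: loading the array into memory first takes dask's indexing path (where `normalize_index` raises above) out of the loop entirely.

```python
import xarray as xr

arr_disk = xr.open_dataarray('test.nc', engine='h5netcdf', chunks=dict(one=1))

# .load() converts the dask-backed array into a numpy-backed one, so
# groupby iteration no longer goes through dask's normalize_index.
arr_mem = arr_disk.load()
groups = [g for g in arr_mem.groupby('one')]
```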
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
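The row filter at the top of this page maps onto the schema above as a straightforward query. A sketch using Python's built-in `sqlite3`, assuming the table lives in a local database file (the name `github.db` is hypothetical):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical file name

# Same filter as the page header: closed issues (not pull requests)
# by user 13190237, most recently updated first.
rows = conn.execute(
    """
    SELECT id, number, title, updated_at
    FROM issues
    WHERE state = 'closed' AND type = 'issue' AND [user] = 13190237
    ORDER BY updated_at DESC
    """
).fetchall()

for id_, number, title, updated_at in rows:
    print(number, title, updated_at)
```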