issues
14 rows where user = 44147817 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2163608564 | I_kwDOAMm_X86A9gv0 | 8802 | Error when using `apply_ufunc` with `datetime64` as output dtype | gcaria 44147817 | open | 0 | 4 | 2024-03-01T15:09:57Z | 2024-05-03T12:19:14Z | CONTRIBUTOR |
What happened?
When using …

What did you expect to happen?
No response

Minimal Complete Verifiable Example
```Python
import xarray as xr
import numpy as np

def _fn(arr: np.ndarray, time: np.ndarray) -> np.ndarray:
    return time[:10]

def fn(da: xr.DataArray) -> xr.DataArray:
    dim_out = "time_cp"
    ...

da_fake = xr.DataArray(
    np.random.rand(5, 5, 5),
    coords=dict(
        x=range(5),
        y=range(5),
        time=np.array(
            ['2024-01-01', '2024-01-02', '2024-01-03',
             '2024-01-04', '2024-01-05'],
            dtype='datetime64[ns]',
        ),
    ),
).chunk(dict(x=2, y=2))

fn(da_fake.compute()).compute()
# ValueError: Cannot convert from specific units to generic units
# in NumPy datetimes or timedeltas

fn(da_fake).compute()  # same error as above
```

MVCE confirmation

Relevant log output
```Python
ValueError                                Traceback (most recent call last)
Cell In[211], line 1
----> 1 fn(da_fake).compute()

File /srv/conda/envs/notebook/lib/python3.10/site-packages/xarray/core/dataarray.py:1163, in DataArray.compute(self, **kwargs)
   1144 """Manually trigger loading of this array's data from disk or a
   1145 remote source into memory and return a new array. The original is
   1146 left unaltered.
   (...)
   1160 dask.compute
   1161 """
   1162 new = self.copy(deep=False)
-> 1163 return new.load(**kwargs)

File /srv/conda/envs/notebook/lib/python3.10/site-packages/xarray/core/dataarray.py:1137, in DataArray.load(self, **kwargs)
   1119 def load(self, **kwargs) -> Self:
   1120     """Manually trigger loading of this array's data from disk or a
   1121     remote source into memory and return this array.
   (...)
   1135     dask.compute
   1136     """
-> 1137 ds = self._to_temp_dataset().load(**kwargs)
   1138 new = self._from_temp_dataset(ds)
   1139 self._variable = new._variable

File /srv/conda/envs/notebook/lib/python3.10/site-packages/xarray/core/dataset.py:853, in Dataset.load(self, **kwargs)
    850 chunkmanager = get_chunked_array_type(*lazy_data.values())
    852 # evaluate all the chunked arrays simultaneously
--> 853 evaluated_data = chunkmanager.compute(*lazy_data.values(), **kwargs)
    855 for k, data in zip(lazy_data, evaluated_data):
    856     self.variables[k].data = data

File /srv/conda/envs/notebook/lib/python3.10/site-packages/xarray/core/daskmanager.py:70, in DaskManager.compute(self, *data, **kwargs)
    67 def compute(self, *data: DaskArray, **kwargs) -> tuple[np.ndarray, ...]:
    68     from dask.array import compute
---> 70 return compute(*data, **kwargs)

File /srv/conda/envs/notebook/lib/python3.10/site-packages/dask/base.py:628, in compute(*args, traverse, optimize_graph, scheduler, get, **kwargs)
    625 postcomputes.append(x.__dask_postcompute__())
    627 with shorten_traceback():
--> 628     results = schedule(dsk, keys, **kwargs)
    630 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])

File /srv/conda/envs/notebook/lib/python3.10/site-packages/numpy/lib/function_base.py:2372, in vectorize.__call__(self, *args, **kwargs)
   2369 self._init_stage_2(*args, **kwargs)
   2370 return self
-> 2372 return self._call_as_normal(*args, **kwargs)

File /srv/conda/envs/notebook/lib/python3.10/site-packages/numpy/lib/function_base.py:2365, in vectorize._call_as_normal(self, *args, **kwargs)
   2362 vargs = [args[_i] for _i in inds]
   2363 vargs.extend([kwargs[_n] for _n in names])
-> 2365 return self._vectorize_call(func=func, args=vargs)

File /srv/conda/envs/notebook/lib/python3.10/site-packages/numpy/lib/function_base.py:2446, in vectorize._vectorize_call(self, func, args)
   2444 """Vectorized call to …

File /srv/conda/envs/notebook/lib/python3.10/site-packages/numpy/lib/function_base.py:2506, in vectorize._vectorize_call_with_signature(self, func, args)
   2502 outputs = _create_arrays(broadcast_shape, dim_sizes,
   2503                          output_core_dims, otypes, results)
   2505 for output, result in zip(outputs, results):
-> 2506 output[index] = result
   2508 if outputs is None:
   2509     # did not call the function even once
   2510     if otypes is None:

ValueError: Cannot convert from specific units to generic units in NumPy datetimes or timedeltas
```

Anything else we need to know?
No response

Environment
…
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
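The body of `fn` in the MVCE above is truncated by the export. The sketch below is a hypothetical reconstruction of how such a wrapper typically calls `apply_ufunc`; the `vectorize=True` path and the core-dimension choices are inferred from the `numpy.vectorize` frames in the traceback, not confirmed by the report, and the output size is illustrative.

```python
# Hypothetical reconstruction, not the reporter's exact code: return
# datetime64 values along a new core dimension through apply_ufunc.
import numpy as np
import xarray as xr

def _fn(arr: np.ndarray, time: np.ndarray) -> np.ndarray:
    return time[:10]

def fn(da: xr.DataArray) -> xr.DataArray:
    dim_out = "time_cp"  # name taken from the truncated snippet
    return xr.apply_ufunc(
        _fn,
        da,
        da["time"],
        input_core_dims=[["time"], ["time"]],
        output_core_dims=[[dim_out]],
        vectorize=True,  # the traceback runs through numpy.vectorize
        dask="parallelized",
        # Asking for a unit-specific datetime64 output is where numpy's
        # vectorize machinery hits the generic-vs-specific units error.
        output_dtypes=[np.dtype("datetime64[ns]")],
        dask_gufunc_kwargs=dict(output_sizes={dim_out: 10}),  # illustrative
    )
```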
1050367667 | PR_kwDOAMm_X84uYCxJ | 5972 | Respect keep_attrs when using `Dataset.set_index` (#4955) | gcaria 44147817 | closed | 0 | 6 | 2021-11-10T22:22:31Z | 2023-10-03T00:09:41Z | 2023-10-03T00:09:41Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5972 |
The original issue was about …
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
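For context, a minimal sketch (with made-up data) of the behavior this PR's title describes: attributes should survive `Dataset.set_index`.

```python
# Sketch with hypothetical data: with this fix, attrs survive set_index.
import xarray as xr

ds = xr.Dataset(
    {"v": ("x", [1, 2])},
    coords={"label": ("x", ["a", "b"])},
    attrs={"title": "demo"},
)
out = ds.set_index(x="label")  # promote the "label" coordinate to the x index
print(out.attrs)               # expected with the fix: {'title': 'demo'}
```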
1517575123 | I_kwDOAMm_X85adFvT | 7409 | Implement `DataArray.to_dask_dataframe()` | gcaria 44147817 | closed | 0 | 4 | 2023-01-03T15:44:11Z | 2023-04-28T15:09:31Z | 2023-04-28T15:09:31Z | CONTRIBUTOR |
Is your feature request related to a problem?
It'd be nice to go from a chunked DataArray to a dask object directly.

Describe the solution you'd like
I think something along these lines should work (although a less convoluted way might exist):
```python
from typing import Union

import dask.dataframe as dkd
import xarray as xr

def to_dask(da: xr.DataArray) -> Union[dkd.Series, dkd.DataFrame]:
    ...
```
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7409/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
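One plausible way to finish the truncated helper above (hedged; the implementation that actually closed this issue may differ) is to wrap the array in a temporary Dataset and reuse the existing `Dataset.to_dask_dataframe`:

```python
# Hedged completion of the truncated helper: route through the existing
# Dataset.to_dask_dataframe rather than building the dask graph by hand.
import dask.dataframe as dkd
import xarray as xr

def to_dask(da: xr.DataArray) -> dkd.DataFrame:
    # The fallback name is our own choice; unnamed DataArrays need one.
    name = da.name if da.name is not None else "values"
    return da.to_dataset(name=name).to_dask_dataframe()
```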
1143489702 | I_kwDOAMm_X85EKESm | 6288 | `Dataset.to_zarr()` does not preserve CRS information | gcaria 44147817 | closed | 0 | 6 | 2022-02-18T17:51:02Z | 2022-08-29T23:40:44Z | 2022-03-21T05:19:48Z | CONTRIBUTOR |
What happened?
When writing a DataArray with CRS information to zarr, after converting it to a Dataset, the CRS is not readable from the zarr file.

What did you expect to happen?
To be able to retrieve the CRS information from the zarr file.

Minimal Complete Verifiable Example
```python
import numpy as np
import xarray as xr
import rioxarray  # needed for the .rio accessor

da = xr.DataArray(np.arange(9).reshape(3, 3),
                  coords={'x': range(3), 'y': range(3)})
da = da.rio.write_crs(4326)
da.to_dataset(name='var').to_zarr('var.zarr')

xr.open_zarr('var.zarr')['var'].rio.crs == None  # returns True
```

Anything else we need to know?
I'd be happy to have a look at this if it is indeed a bug.

Environment
INSTALLED VERSIONS
commit: None
python: 3.9.0 (default, Jan 17 2022, 21:57:22) [GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 5.11.0-1028-aws
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.1
libnetcdf: None
xarray: 0.20.1
pandas: 1.3.4
numpy: 1.21.4
scipy: 1.7.3
netCDF4: None
pydap: None
h5netcdf: None
h5py: 3.6.0
Nio: None
zarr: 2.11.0
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.10
cfgrib: None
iris: None
bottleneck: None
dask: 2022.01.0
distributed: 2022.01.0
matplotlib: 3.5.1
cartopy: None
seaborn: None
numbagg: None
fsspec: 2021.11.1
cupy: None
pint: None
sparse: None
setuptools: 60.2.0
pip: 21.3.1
conda: None
pytest: 6.2.5
IPython: 8.0.0
sphinx: None
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6288/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
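A hedged follow-up sketch (not necessarily the thread's confirmed resolution): rioxarray records the CRS on a `spatial_ref` grid-mapping variable, and asking xarray to decode grid-mapping variables back into coordinates makes the CRS visible again after the zarr round trip.

```python
# Hedged sketch: decode grid-mapping variables (e.g. rioxarray's "spatial_ref")
# back into coordinates so the .rio accessor can find the CRS again.
import xarray as xr
import rioxarray  # registers the .rio accessor

ds = xr.open_zarr('var.zarr', decode_coords="all")
print(ds['var'].rio.crs)  # expected: EPSG:4326 instead of None
```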
963006707 | MDExOlB1bGxSZXF1ZXN0NzA1NzEzMTc1 | 5680 | ENH: Add default fill values for decode_cf | gcaria 44147817 | open | 0 | 8 | 2021-08-06T19:54:05Z | 2022-06-09T14:50:16Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5680 |
This is a work in progress, mostly so that I can ask some clarifying questions. I see that … From the issue's conversation, it wasn't clear to me whether an argument should control the use of the default fill value. Since some tests fail now, I guess the answer is yes.
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
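To make the PR's subject concrete, a small illustration; the sentinel is netCDF's documented default fill value for float32, and whether `decode_cf` should mask such defaults is exactly what the PR discusses.

```python
# Illustration only: netCDF defines per-dtype default fill values, and this PR
# is about decode_cf masking them when no explicit _FillValue attribute is set.
import numpy as np
import xarray as xr

NC_FILL_FLOAT = 9.969209968386869e36  # netCDF default fill for float32

raw = xr.Dataset({"v": ("x", np.array([1.0, NC_FILL_FLOAT], dtype=np.float32))})
decoded = xr.decode_cf(raw)  # with default-fill handling, the sentinel -> NaN
```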
1178365524 | I_kwDOAMm_X85GPG5U | 6405 | Docstring of `open_zarr` fails to mention that `decode_coords` could be a string too | gcaria 44147817 | open | 0 | 0 | 2022-03-23T16:30:11Z | 2022-03-23T16:49:14Z | CONTRIBUTOR |
What is your issue?
The docstring of …
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6405/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
1048012238 | PR_kwDOAMm_X84uQWuF | 5957 | Do not change coordinate inplace when throwing error | gcaria 44147817 | closed | 0 | 2 | 2021-11-08T23:10:22Z | 2021-11-09T20:28:13Z | 2021-11-09T20:28:13Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5957 |
Not the prettiest of fixes, but this works around the fact that mutable data types (e.g. when …
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
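As a generic illustration of the pitfall named in the title (hypothetical code, not the xarray code path): validate before mutating, so a raised error cannot leave the caller's object half-changed.

```python
# Hypothetical illustration, not xarray's code: mutating shared state before
# validating means the change survives even though the error is raised.
def set_units_bad(attrs: dict, units) -> None:
    attrs["units"] = units  # the caller's dict is already changed...
    if not isinstance(units, str):
        raise ValueError("units must be a string")  # ...when this fires

def set_units_good(attrs: dict, units) -> None:
    if not isinstance(units, str):
        raise ValueError("units must be a string")  # fail before touching state
    attrs["units"] = units
```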
945600101 | MDExOlB1bGxSZXF1ZXN0NjkwOTA2NTky | 5611 | Set coord name concat when `concat`ing along a DataArray | gcaria 44147817 | closed | 0 | 4 | 2021-07-15T17:20:54Z | 2021-08-23T17:24:43Z | 2021-08-23T17:00:39Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5611 |
Technically this creates user-visible changes, but I'm not sure whether it's worth adding to the whatsnew.
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
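A usage sketch (hypothetical data) of the behavior in the title: when `dim` is itself a named DataArray, the new concat coordinate should carry that name.

```python
# Sketch with hypothetical data: concat along a named DataArray and check
# that the resulting coordinate picks up its name.
import xarray as xr

pieces = [xr.DataArray([1, 2], dims="x"), xr.DataArray([3, 4], dims="x")]
runs = xr.DataArray([10, 20], dims="run", name="run")
out = xr.concat(pieces, dim=runs)
print(out.coords["run"])  # the new dimension coordinate is named "run"
```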
973934781 | MDExOlB1bGxSZXF1ZXN0NzE1MzEzMDEz | 5713 | DOC: Remove suggestion to install pytest-xdist in docs | gcaria 44147817 | closed | 0 | 2 | 2021-08-18T18:14:47Z | 2021-08-19T22:16:24Z | 2021-08-19T22:16:19Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5713 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
938905952 | MDExOlB1bGxSZXF1ZXN0Njg1MjA3MTQ5 | 5586 | Accept missing_dims in Variable.transpose and Dataset.transpose | gcaria 44147817 | closed | 0 | 4 | 2021-07-07T13:39:15Z | 2021-07-17T21:03:00Z | 2021-07-17T21:02:59Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5586 |
Regarding https://github.com/pydata/xarray/issues/5550#issuecomment-875040245, inside the for loop only a Variable's own dimensions are selected for the transpose, so a dimension that is missing from all DataArrays would just be ignored silently. Hence it's necessary to check at the beginning of the function.
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
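A usage sketch of the option this PR adds (data made up): `missing_dims` chooses between raising, warning, and silently ignoring a dimension the object lacks.

```python
# Sketch with hypothetical data: missing_dims controls how transpose treats
# dimensions that are absent from the Dataset.
import xarray as xr

ds = xr.Dataset({"a": (("x", "y"), [[1, 2], [3, 4]])})
ds.transpose("y", "x", "z", missing_dims="ignore")  # "z" is skipped silently
ds.transpose("y", "x", "z", missing_dims="warn")    # warns, then ignores "z"
# ds.transpose("y", "x", "z")                       # default "raise": ValueError
```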
891760873 | MDExOlB1bGxSZXF1ZXN0NjQ0NTc3MDMx | 5308 | Move encode expected test failures to own function | gcaria 44147817 | closed | 0 | 1 | 2021-05-14T09:17:30Z | 2021-05-14T10:06:20Z | 2021-05-14T10:06:13Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5308 | A quick fix for a missing commit of PR #5288 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
885038560 | MDExOlB1bGxSZXF1ZXN0NjM4MzgxMzUy | 5288 | Raise error for invalid reference date for encoding time units | gcaria 44147817 | closed | 0 | 7 | 2021-05-10T20:25:23Z | 2021-05-14T09:43:30Z | 2021-05-13T18:27:13Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5288 |
Although the error raised by this commit does not include the whole units string, I believe it is more useful and specific, since it focuses on the part (the reference date) that's actually causing the problem.
Also, the reference date is the only information available in … I had a look in … Have I missed it?

EDIT: I've tried to substitute the error raise on line 129 with a …
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
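For orientation, a tiny sketch (hypothetical values, not the PR's code) of the CF time-units string involved: it embeds a reference date, and a malformed date is what the new error points at.

```python
# Hypothetical sketch of the input this PR validates: CF time units take the
# form "<step> since <reference date>", and the reference date can be invalid.
import pandas as pd

units = "days since not-a-date"    # malformed reference date
step, _, ref_date = units.partition(" since ")
pd.Timestamp(ref_date)             # raises ValueError for this string
```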
874044225 | MDExOlB1bGxSZXF1ZXN0NjI4Njc2ODUz | 5247 | Add to_pandas method for Dataset | gcaria 44147817 | closed | 0 | 3 | 2021-05-02T21:04:53Z | 2021-05-04T13:56:17Z | 2021-05-04T13:56:00Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5247 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
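A usage sketch (made-up data) of the method this PR adds: a Dataset with at most one dimension coerces to the matching pandas object.

```python
# Sketch with hypothetical data: a one-dimensional Dataset becomes a DataFrame.
import xarray as xr

ds = xr.Dataset(
    {"a": ("x", [1, 2, 3]), "b": ("x", [4.0, 5.0, 6.0])},
    coords={"x": [10, 20, 30]},
)
df = ds.to_pandas()  # DataFrame indexed by x, one column per data variable
print(df)
```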
867046262 | MDExOlB1bGxSZXF1ZXN0NjIyNzkwOTcx | 5216 | Enable using __setitem__ for Dataset using a list as key | gcaria 44147817 | closed | 0 | 8 | 2021-04-25T15:57:48Z | 2021-05-02T20:30:26Z | 2021-05-02T20:29:34Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5216 |
Hi xarray folks, long-time user, first-time contributor here. I believe the tests for this feature should be expanded, so please consider this a work in progress. Any feedback is greatly appreciated!
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);