issues
506 rows where user = 5635139 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2272299822 | PR_kwDOAMm_X85uL82a | 8989 | Skip flaky `test_open_mfdataset_manyfiles` test | max-sixty 5635139 | closed | 0 | 0 | 2024-04-30T19:24:41Z | 2024-04-30T20:27:04Z | 2024-04-30T19:46:34Z | MEMBER | 0 | pydata/xarray/pulls/8989 | Don't just xfail, and not only on Windows, since it can crash the worker |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2271670475 | PR_kwDOAMm_X85uJ5Er | 8988 | Remove `.drop` warning allow | max-sixty 5635139 | closed | 0 | 0 | 2024-04-30T14:39:35Z | 2024-04-30T19:26:17Z | 2024-04-30T19:26:16Z | MEMBER | 0 | pydata/xarray/pulls/8988 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2271652603 | PR_kwDOAMm_X85uJ122 | 8987 | Add notes on when to add ignores to warnings | max-sixty 5635139 | closed | 0 | 0 | 2024-04-30T14:34:52Z | 2024-04-30T14:56:47Z | 2024-04-30T14:56:46Z | MEMBER | 0 | pydata/xarray/pulls/8987 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1250939008 | I_kwDOAMm_X85Kj9CA | 6646 | `dim` vs `dims` | max-sixty 5635139 | closed | 0 | 4 | 2022-05-27T16:15:02Z | 2024-04-29T18:24:56Z | 2024-04-29T18:24:56Z | MEMBER | What is your issue? I've recently been hit with this when experimenting with Should we standardize on one of these? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
2268058661 | PR_kwDOAMm_X85t9f5f | 8982 | Switch all methods to `dim` | max-sixty 5635139 | closed | 0 | 0 | 2024-04-29T03:42:34Z | 2024-04-29T18:24:56Z | 2024-04-29T18:24:55Z | MEMBER | 0 | pydata/xarray/pulls/8982 | I think this is the final set of methods
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8982/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2267810980 | PR_kwDOAMm_X85t8q4s | 8981 | Enable ffill for datetimes | max-sixty 5635139 | closed | 0 | 5 | 2024-04-28T20:53:18Z | 2024-04-29T18:09:48Z | 2024-04-28T23:02:11Z | MEMBER | 0 | pydata/xarray/pulls/8981 | Notes inline. Would fix #4587 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8981/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2262478932 | PR_kwDOAMm_X85tqpUi | 8974 | Raise errors on new warnings from within xarray | max-sixty 5635139 | closed | 0 | 2 | 2024-04-25T01:50:48Z | 2024-04-29T12:18:42Z | 2024-04-29T02:50:21Z | MEMBER | 0 | pydata/xarray/pulls/8974 | Notes are inline.
Done with some help from an LLM — quite good for doing tedious tasks that we otherwise wouldn't want to do — can paste in all the warnings output and get a decent start on rules for exclusions |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1997537503 | PR_kwDOAMm_X85fqp3A | 8459 | Check for aligned chunks when writing to existing variables | max-sixty 5635139 | closed | 0 | 5 | 2023-11-16T18:56:06Z | 2024-04-29T03:05:36Z | 2024-03-29T14:35:50Z | MEMBER | 0 | pydata/xarray/pulls/8459 | While I don't feel super confident that this is designed to protect against any bugs, it does solve the immediate problem in #8371, by hoisting the encoding check above the code that runs only for new variables. The encoding check is somewhat implicit, so this was easy to miss previously.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8459/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2244681150 | PR_kwDOAMm_X85suxIl | 8947 | Add mypy to dev dependencies | max-sixty 5635139 | closed | 0 | 0 | 2024-04-15T21:39:19Z | 2024-04-17T16:39:23Z | 2024-04-17T16:39:22Z | MEMBER | 0 | pydata/xarray/pulls/8947 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1960332384 | I_kwDOAMm_X8502Exg | 8371 | Writing to regions with unaligned chunks can lose data | max-sixty 5635139 | closed | 0 | 20 | 2023-10-25T01:17:59Z | 2024-03-29T14:35:51Z | 2024-03-29T14:35:51Z | MEMBER |

What happened?

Writing with I've recreated an example below. While it's unlikely that folks are passing different values to (FWIW, this was fairly painful, and I managed to lose a lot of time by not noticing this, and then not really considering this could happen as I was trying to debug. I think we should really strive to ensure that we don't lose data / incorrectly report that we've successfully written data...)

What did you expect to happen?

If there's a risk of data loss, raise an error...

Minimal Complete Verifiable Example

```Python
ds = xr.DataArray(np.arange(120).reshape(4, 3, -1), dims=list("abc")).rename('var1').to_dataset().chunk(2)
ds
# <xarray.Dataset>
# Dimensions:  (a: 4, b: 3, c: 10)
# Dimensions without coordinates: a, b, c
# Data variables:
#     var1     (a, b, c) int64 dask.array<chunksize=(2, 2, 2), meta=np.ndarray>

def write(ds):
    ds.chunk(5).to_zarr('foo.zarr', compute=False, mode='w')
    for r in range(ds.sizes['a']):
        ds.chunk(3).isel(a=[r]).to_zarr('foo.zarr', region=dict(a=slice(r, r + 1)))

def read(ds):
    result = xr.open_zarr('foo.zarr')
    assert result.compute().identical(ds)
    print(result.chunksizes, ds.chunksizes)

write(ds); read(ds)
# AssertionError

xr.open_zarr('foo.zarr').compute()['var1']
# <xarray.DataArray 'var1' (a: 4, b: 3, c: 10)>
# array([[[ 0,  0,  0,  3,  4,  5,  0,  0,  0,  9],
#         [ 0,  0,  0, 13, 14, 15,  0,  0,  0, 19],
#         [ 0,  0,  0, 23, 24, 25,  0,  0,  0, 29]],
# ...
# Dimensions without coordinates: a, b, c
```

MVCE confirmation

Relevant log output

No response

Anything else we need to know?

No response

Environment
INSTALLED VERSIONS
------------------
commit: ccc8f9987b553809fb6a40c52fa1a8a8095c8c5f
python: 3.9.18 (main, Aug 24 2023, 21:19:58)
[Clang 14.0.3 (clang-1403.0.22.14.1)]
python-bits: 64
OS: Darwin
OS-release: 22.6.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: en_US.UTF-8
LANG: None
LOCALE: ('en_US', 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 2023.10.2.dev10+gccc8f998
pandas: 2.1.1
numpy: 1.25.2
scipy: 1.11.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.16.0
cftime: None
nc_time_axis: None
PseudoNetCDF: None
iris: None
bottleneck: None
dask: 2023.4.0
distributed: 2023.7.1
matplotlib: 3.5.1
cartopy: None
seaborn: None
numbagg: 0.2.3.dev30+gd26e29e
fsspec: 2021.11.1
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: 0.9.19
setuptools: 68.1.2
pip: 23.2.1
conda: None
pytest: 7.4.0
mypy: 1.6.0
IPython: 8.15.0
sphinx: 4.3.2
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
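The safety condition that #8371 (and the follow-up check in #8459) revolves around can be modelled without xarray at all. A minimal sketch — `region_is_chunk_aligned` is a hypothetical helper, not xarray's actual code:

```python
# Hypothetical helper modelling the safety condition for `to_zarr(region=...)`:
# a region write is only safe when the slice boundaries fall on the on-disk
# Zarr chunk boundaries, so concurrent writers never touch a shared chunk.
def region_is_chunk_aligned(start: int, stop: int, size: int, chunksize: int) -> bool:
    starts_on_boundary = start % chunksize == 0
    stops_on_boundary = stop % chunksize == 0 or stop == size
    return starts_on_boundary and stops_on_boundary

# The failing pattern from the report: the store is chunked with 5 along "a",
# but each writer writes a 1-long region, so writes straddle shared chunks.
assert not region_is_chunk_aligned(start=1, stop=2, size=4, chunksize=5)

# A safe pattern: region boundaries that line up with the store's chunking.
assert region_is_chunk_aligned(start=0, stop=5, size=10, chunksize=5)
assert region_is_chunk_aligned(start=5, stop=10, size=10, chunksize=5)
```

When every writer's region passes this check, each on-disk chunk has exactly one owner, which is what prevents the silent data loss shown in the example above.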
2110888925 | I_kwDOAMm_X8590Zvd | 8690 | Add `nbytes` to repr? | max-sixty 5635139 | closed | 0 | 9 | 2024-01-31T20:13:59Z | 2024-02-19T22:18:47Z | 2024-02-07T20:47:38Z | MEMBER |

Is your feature request related to a problem?

Would having the I frequently find myself logging this separately. For example:

Describe the solution you'd like

No response

Describe alternatives you've considered

Status quo :)

Additional context

No response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8690/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
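For context on what a size readout could look like, here's a hypothetical human-readable formatter for `nbytes`. The repr change that was eventually merged uses xarray's own formatting; this is only an illustrative sketch:

```python
# Hypothetical formatter: scale a raw byte count into a compact unit
# suitable for embedding in a one-line repr header.
def format_nbytes(n: float) -> str:
    for unit in ("B", "kB", "MB", "GB", "TB"):
        if n < 1000:
            return f"{n:.0f}{unit}" if unit == "B" else f"{n:.1f}{unit}"
        n /= 1000
    return f"{n:.1f}PB"

assert format_nbytes(8) == "8B"
assert format_nbytes(123_456_789) == "123.5MB"
```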
2128692061 | PR_kwDOAMm_X85mkDqu | 8735 | Remove fsspec exclusion from 2021 | max-sixty 5635139 | closed | 0 | 1 | 2024-02-10T19:43:14Z | 2024-02-11T00:19:30Z | 2024-02-11T00:19:29Z | MEMBER | 0 | pydata/xarray/pulls/8735 | Presumably no longer needed |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2128687154 | PR_kwDOAMm_X85mkCum | 8734 | Silence dask doctest warning | max-sixty 5635139 | closed | 0 | 0 | 2024-02-10T19:25:47Z | 2024-02-10T23:44:24Z | 2024-02-10T23:44:24Z | MEMBER | 0 | pydata/xarray/pulls/8734 | Closes #8732. Not the most elegant implementation but it's only temporary |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1920361792 | PR_kwDOAMm_X85bl988 | 8258 | Add a `.drop_attrs` method | max-sixty 5635139 | open | 0 | 9 | 2023-09-30T18:42:12Z | 2024-02-09T18:49:22Z | MEMBER | 0 | pydata/xarray/pulls/8258 | Part of #3891 ~Do we think this is a good idea? I'll add docs & tests if so...~ Ready to go, just needs agreement on whether it's good |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8258/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2126375172 | I_kwDOAMm_X85-vekE | 8726 | PRs requiring approval & merging main? | max-sixty 5635139 | closed | 0 | 4 | 2024-02-09T02:35:58Z | 2024-02-09T18:23:52Z | 2024-02-09T18:21:59Z | MEMBER |

What is your issue?

Sorry I haven't been on the calls at all recently (unfortunately the schedule is difficult for me). Maybe this was discussed there?

PRs now seem to require a separate approval prior to merging. Is there an upside to this? Is there any difference between those who can approve and those who can merge? Otherwise it just seems like more clicking.

PRs also now seem to require merging the latest main prior to merging? I get there's some theoretical value to this, because changes can semantically conflict with each other. But it's extremely rare that this actually happens (can we point to cases?), and it limits the immediacy & throughput of PRs. If the bad outcome does ever happen, we find out quickly when main tests fail and can revert.

(fwiw I wrote a few principles around this down a while ago here; those are much stronger than what I'm suggesting in this issue though) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
2126095122 | PR_kwDOAMm_X85mbRG7 | 8724 | Switch `.dt` to raise an `AttributeError` | max-sixty 5635139 | closed | 0 | 0 | 2024-02-08T21:26:06Z | 2024-02-09T02:21:47Z | 2024-02-09T02:21:46Z | MEMBER | 0 | pydata/xarray/pulls/8724 | Discussion at #8718 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8724/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1984961987 | I_kwDOAMm_X852UB3D | 8432 | Writing a datetime coord ignores chunks | max-sixty 5635139 | closed | 0 | 5 | 2023-11-09T07:00:39Z | 2024-01-29T19:12:33Z | 2024-01-29T19:12:33Z | MEMBER |

What happened?

When writing a coord with a datetime type, the chunking on the coord is ignored, and the whole coord is written as a single chunk. (Or at least it can be; I haven't done enough to confirm whether it'll always be...) This can be quite inconvenient. Any attempt to write to that dataset from a distributed process will have errors, since each process will be attempting to write another process's data, rather than only its region. And less severely, the chunks won't be unified.

Minimal Complete Verifiable Example

```Python
ds = xr.tutorial.load_dataset('air_temperature')

(
    ds.chunk()
    .expand_dims(a=1000)
    .assign_coords(
        time2=lambda x: x.time,
        time_int=lambda x: (("time"), np.full(ds.sizes["time"], 1)),
    )
    .chunk(time=10)
    .to_zarr("foo.zarr", mode="w")
)

xr.open_zarr('foo.zarr')
```

Note the
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8432/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
2099077744 | PR_kwDOAMm_X85k_vqU | 8661 | Add `dev` dependencies to `pyproject.toml` | max-sixty 5635139 | closed | 0 | 1 | 2024-01-24T20:48:55Z | 2024-01-25T06:24:37Z | 2024-01-25T06:24:36Z | MEMBER | 0 | pydata/xarray/pulls/8661 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2097231358 | PR_kwDOAMm_X85k5dSd | 8648 | xfail another test on windows | max-sixty 5635139 | closed | 0 | 0 | 2024-01-24T01:04:01Z | 2024-01-24T01:23:26Z | 2024-01-24T01:23:26Z | MEMBER | 0 | pydata/xarray/pulls/8648 | As ever, very open to approaches to fix these. But unless we can fix them, xfailing them seems like the most reasonable solution |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8648/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2089331658 | PR_kwDOAMm_X85keyUs | 8624 | Use ddof in `numbagg>=0.7.0` for aggregations | max-sixty 5635139 | closed | 0 | 0 | 2024-01-19T00:23:15Z | 2024-01-23T02:25:39Z | 2024-01-23T02:25:38Z | MEMBER | 0 | pydata/xarray/pulls/8624 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2094956413 | PR_kwDOAMm_X85kxwAk | 8643 | xfail zarr test on Windows | max-sixty 5635139 | closed | 0 | 0 | 2024-01-22T23:24:12Z | 2024-01-23T00:40:29Z | 2024-01-23T00:40:28Z | MEMBER | 0 | pydata/xarray/pulls/8643 | I see this failing quite a lot of the time... Ofc open to a proper solution but in the meantime setting this to xfail |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8643/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2092299525 | PR_kwDOAMm_X85kozmg | 8630 | Use `T_DataArray` in `Weighted` | max-sixty 5635139 | closed | 0 | 0 | 2024-01-21T01:18:14Z | 2024-01-22T04:28:07Z | 2024-01-22T04:28:07Z | MEMBER | 0 | pydata/xarray/pulls/8630 | Allows subtypes. (I had this in my git stash, so committing it...) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2092855603 | PR_kwDOAMm_X85kqlH4 | 8639 | Silence deprecation warning from `.dims` in tests | max-sixty 5635139 | closed | 0 | 1 | 2024-01-22T00:25:07Z | 2024-01-22T02:04:54Z | 2024-01-22T02:04:53Z | MEMBER | 0 | pydata/xarray/pulls/8639 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2092790802 | PR_kwDOAMm_X85kqX8y | 8637 | xfail a cftime test | max-sixty 5635139 | closed | 0 | 0 | 2024-01-21T21:43:59Z | 2024-01-21T22:00:59Z | 2024-01-21T22:00:58Z | MEMBER | 0 | pydata/xarray/pulls/8637 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8637/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2092777417 | PR_kwDOAMm_X85kqVIH | 8636 | xfail another dask/pyarrow test | max-sixty 5635139 | closed | 0 | 1 | 2024-01-21T21:26:19Z | 2024-01-21T21:42:22Z | 2024-01-21T21:42:21Z | MEMBER | 0 | pydata/xarray/pulls/8636 | Unsure why this wasn't showing up before — having tests fail even in the good state makes it much more difficult to ensure everything is fixed before merging. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2089351473 | PR_kwDOAMm_X85ke2qd | 8625 | Don't show stdlib paths for `user_level_warnings` | max-sixty 5635139 | closed | 0 | 0 | 2024-01-19T00:45:14Z | 2024-01-21T21:08:40Z | 2024-01-21T21:08:39Z | MEMBER | 0 | pydata/xarray/pulls/8625 | Was previously seeing:
Now:
It's a heuristic, so not perfect, but I think very likely to be accurate. Any contrary cases very welcome... |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8625/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2092762468 | PR_kwDOAMm_X85kqSLW | 8635 | xfail pyarrow test | max-sixty 5635139 | closed | 0 | 0 | 2024-01-21T20:42:50Z | 2024-01-21T21:03:35Z | 2024-01-21T21:03:34Z | MEMBER | 0 | pydata/xarray/pulls/8635 | Sorry for the repeated PR -- some tests passed but some failed without pyarrow installed. So this xfails the test for the moment |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8635/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2092747686 | PR_kwDOAMm_X85kqPTB | 8634 | Workaround broken test from pyarrow | max-sixty 5635139 | closed | 0 | 0 | 2024-01-21T20:01:51Z | 2024-01-21T20:18:23Z | 2024-01-21T20:18:22Z | MEMBER | 0 | pydata/xarray/pulls/8634 | While fixing the previous issue, I introduced another (but didn't see it because of the errors from the test suite; I probably should have looked closer...) This doesn't fix the behavior, but I think it's minor so fine to push off. I do want to prioritize getting the tests back to a state where pass vs. failure is meaningful. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2092300888 | PR_kwDOAMm_X85koz3r | 8631 | Partially fix doctests | max-sixty 5635139 | closed | 0 | 1 | 2024-01-21T01:25:02Z | 2024-01-21T01:33:43Z | 2024-01-21T01:31:46Z | MEMBER | 0 | pydata/xarray/pulls/8631 | Currently getting a error without pyarrow in CI: https://github.com/pydata/xarray/actions/runs/7577666145/job/20693665924 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1923361961 | I_kwDOAMm_X85ypCyp | 8263 | Surprising `.groupby` behavior with float index | max-sixty 5635139 | closed | 0 | 0 | 2023-10-03T05:50:49Z | 2024-01-08T01:05:25Z | 2024-01-08T01:05:25Z | MEMBER |

What is your issue?

We raise an error on grouping without supplying dims, but not for float indexes — is this intentional or an oversight?

```python
da = xr.tutorial.open_dataset("air_temperature")['air']
da.drop_vars('lat').groupby('lat').sum()
```

```
ValueError                                Traceback (most recent call last)
Cell In[8], line 1
----> 1 da.drop_vars('lat').groupby('lat').sum()
...
ValueError: cannot reduce over dimensions ['lat']. expected either '...' to reduce over all dimensions or one or more of ('time', 'lon').
```

But with a float index, we don't raise:
...returns the original array:
And if we try this with a non-float index, we get the error again:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1916677049 | I_kwDOAMm_X85yPiu5 | 8245 | Tools for writing distributed zarrs | max-sixty 5635139 | open | 0 | 0 | 2023-09-28T04:25:45Z | 2024-01-04T00:15:09Z | MEMBER | What is your issue? There seems to be a common pattern for writing zarrs from a distributed set of machines, in parallel. It's somewhat described in the prose of the io docs. Quoting:
I've been using this fairly successfully recently. It's much better than writing hundreds or thousands of data variables, since many small data variables create a huge number of files. Are there some tools we can provide to make this easier? Some ideas:
- [ ]
More minor papercuts:
- [ ] I've hit an issue where writing a region seemed to cause the worker to attempt to load the whole array into memory — can we offer guarantees for when (non-metadata) data will be loaded during Some things that were in the list here, as they've been completed!!
- [x] Requiring |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8245/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 } |
xarray 13221727 | issue | ||||||||
1975574237 | I_kwDOAMm_X851wN7d | 8409 | Task graphs on `.map_blocks` with many chunks can be huge | max-sixty 5635139 | closed | 0 | 6 | 2023-11-03T07:14:45Z | 2024-01-03T04:10:16Z | 2024-01-03T04:10:16Z | MEMBER |

What happened?

I'm getting task graphs > 1GB, I think possibly because the full indexes are being included in every task?

What did you expect to happen?

Only the relevant sections of the index would be included

Minimal Complete Verifiable Example

```Python
da = xr.tutorial.load_dataset('air_temperature')

# Dropping the index doesn't generally matter that much...
len(cloudpickle.dumps(da.chunk(lat=1, lon=1)))
# 15569320
len(cloudpickle.dumps(da.chunk().drop_vars(da.indexes)))
# 15477313
```

But with
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8409/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
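The blow-up mechanism suspected in #8409 — every task carrying a full copy of the index instead of its own slice — can be modelled without dask. This is an analogy, not xarray's task-graph code:

```python
# If each of n_tasks tasks embeds the full index, the serialized graph is
# roughly n_tasks times larger than if each task carried only its own slice.
import pickle

index = list(range(100_000))
n_tasks = 50

full_copies = sum(len(pickle.dumps(index)) for _ in range(n_tasks))
slices_only = sum(len(pickle.dumps(index[i::n_tasks])) for i in range(n_tasks))

# The per-slice total is close to one copy of the index, so the full-copy
# graph is at least an order of magnitude larger here.
assert full_copies > 10 * slices_only
```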
2052840951 | I_kwDOAMm_X856W933 | 8566 | Use `ddof=1` for `std` & `var` | max-sixty 5635139 | open | 0 | 2 | 2023-12-21T17:47:21Z | 2023-12-27T16:58:46Z | MEMBER |

What is your issue?

I've discussed this a bunch with @dcherian (though I'm not sure he necessarily agrees, I'll let him comment)

Currently xarray uses OTOH:
- It is consistent with numpy
- It wouldn't be a painless change — folks who don't read deprecation messages would see values change very slightly

Any thoughts? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
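The `ddof` distinction under discussion is easy to state concretely. A pure-Python sketch of the two conventions (illustrative, not xarray's implementation):

```python
# ddof=0 divides by n (population variance; numpy's and currently xarray's
# default). ddof=1 divides by n - 1 (the unbiased sample estimator; pandas'
# default, and what this issue proposes).
def variance(xs, ddof=0):
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - ddof)

data = [1.0, 2.0, 3.0, 4.0]
assert variance(data, ddof=0) == 1.25
assert abs(variance(data, ddof=1) - 5 / 3) < 1e-12
```

The "values change very slightly" concern above is exactly the `n / (n - 1)` factor: for large n the two estimates are nearly identical, so a silent default switch is easy to miss.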
988158051 | MDU6SXNzdWU5ODgxNTgwNTE= | 5764 | Implement __sizeof__ on objects? | max-sixty 5635139 | open | 0 | 6 | 2021-09-03T23:36:53Z | 2023-12-19T18:23:08Z | MEMBER | Is your feature request related to a problem? Please describe.
Currently But

Describe the solution you'd like
If we implement I think that would be something like |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5764/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
reopened | xarray 13221727 | issue | |||||||
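The hook being proposed is the standard `__sizeof__` protocol, which `sys.getsizeof` calls. A minimal sketch — the exact accounting xarray would use (e.g. deferring to `.nbytes`) is a separate question:

```python
import sys

# A thin wrapper that reports the size of the data it owns, so that
# sys.getsizeof reflects more than the wrapper object's own header.
class Wrapper:
    def __init__(self, data: bytes):
        self.data = data

    def __sizeof__(self) -> int:
        return object.__sizeof__(self) + sys.getsizeof(self.data)

w = Wrapper(b"x" * 10_000)
assert sys.getsizeof(w) > 10_000   # the payload is now counted
```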
2033367994 | PR_kwDOAMm_X85hj9np | 8533 | Offer a fixture for unifying DataArray & Dataset tests | max-sixty 5635139 | closed | 0 | 2 | 2023-12-08T22:06:28Z | 2023-12-18T21:30:41Z | 2023-12-18T21:30:40Z | MEMBER | 0 | pydata/xarray/pulls/8533 | Some tests are literally copy & pasted between DataArray & Dataset tests. This change allows them to use a single test. Not everything will be able to use this — sometimes we want to check specifics — but some will — I've changed the |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8533/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1977661256 | I_kwDOAMm_X8514LdI | 8414 | Is there any way of having `.map_blocks` be even more opaque to dask? | max-sixty 5635139 | closed | 0 | 23 | 2023-11-05T06:56:43Z | 2023-12-12T18:14:57Z | 2023-12-12T18:14:57Z | MEMBER |

Is your feature request related to a problem?

Currently I have a workload which does something a bit like:

(the actual calc is a bit more complicated! And while I don't have a MVCE of the full calc, I pasted a task graph below)

Dask — while very impressive in many ways — handles this extremely badly, because it attempts to load the whole of

Describe the solution you'd like

I was hoping to make the internals of this task opaque to dask, so it became a much dumber task runner — just map over the blocks, running the function and writing the result, block by block. I thought I had some success with

Is there any way to make the write more opaque too?

Describe alternatives you've considered

I've built a homegrown thing which is really hacky which does this on a custom scheduler — just runs the functions and writes with

Additional context

(It's also possible I'm making some basic error — and I do remember it working much better last week — so please feel free to direct me / ask me for more examples, if this doesn't ring true) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8414/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
2034575163 | PR_kwDOAMm_X85hn4Pn | 8539 | Filter out doctest warning | max-sixty 5635139 | closed | 0 | 11 | 2023-12-10T23:11:36Z | 2023-12-12T06:37:54Z | 2023-12-11T21:00:01Z | MEMBER | 0 | pydata/xarray/pulls/8539 | Trying to fix #8537. Not sure it'll work and can't test locally so seeing if it passes CI |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8539/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2036491126 | PR_kwDOAMm_X85hud-m | 8543 | Fix incorrect indent | max-sixty 5635139 | closed | 0 | 0 | 2023-12-11T20:41:32Z | 2023-12-11T20:43:26Z | 2023-12-11T20:43:09Z | MEMBER | 0 | pydata/xarray/pulls/8543 | edit: my mistake, this is intended |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8543/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
866826033 | MDU6SXNzdWU4NjY4MjYwMzM= | 5215 | Add a Cumulative aggregation, similar to Rolling | max-sixty 5635139 | closed | 0 | 6 | 2021-04-24T19:59:49Z | 2023-12-08T22:06:53Z | 2023-12-08T22:06:53Z | MEMBER |

Is your feature request related to a problem? Please describe.

Pandas has a

Describe the solution you'd like

Basically the same as pandas — a

Describe alternatives you've considered

Some options:
– This
– Don't add anything, the sugar isn't worth the additional API.
– Go full out and write specialized expanding algos — which will be faster since they don't have to keep track of the window. But not that much faster, likely not worth the effort.
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
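The proposed semantics — a rolling window that grows from the start of the array — can be sketched in plain Python. This is an illustrative model only; the merged implementation in #8512 may differ:

```python
# An expanding ("cumulative") mean: at position i the window covers
# everything from the start up to and including element i.
def cumulative_mean(xs):
    out, total = [], 0.0
    for i, x in enumerate(xs, start=1):
        total += x
        out.append(total / i)
    return out

assert cumulative_mean([2.0, 4.0, 6.0]) == [2.0, 3.0, 4.0]
```

The "specialized expanding algos" alternative mentioned above would replace the running-window bookkeeping with exactly this kind of single running total.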
2022202767 | PR_kwDOAMm_X85g97hj | 8512 | Add Cumulative aggregation | max-sixty 5635139 | closed | 0 | 1 | 2023-12-02T21:03:13Z | 2023-12-08T22:06:53Z | 2023-12-08T22:06:52Z | MEMBER | 0 | pydata/xarray/pulls/8512 | Closes #5215 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2019645081 | I_kwDOAMm_X854YVaZ | 8498 | Allow some notion of ordering in Dataset dims | max-sixty 5635139 | closed | 0 | 5 | 2023-11-30T22:57:23Z | 2023-12-08T19:22:56Z | 2023-12-08T19:22:55Z | MEMBER |

What is your issue?

Currently a Do we gain anything from having unordered dims in a Dataset? Could we have an ordering without enforcing it on every variable? Here's one proposal, with fairly wide error bars:

- Datasets have a dim order, which is set at construction time or through

What do folks think?

[^1]: though also we could do this in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
not_planned | xarray 13221727 | issue | ||||||
2026963757 | I_kwDOAMm_X8540QMt | 8522 | Test failures on `main` | max-sixty 5635139 | closed | 0 | 7 | 2023-12-05T19:22:01Z | 2023-12-06T18:48:24Z | 2023-12-06T17:28:13Z | MEMBER |

What is your issue?

Any ideas what could be causing these? I can't immediately reproduce locally.

https://github.com/pydata/xarray/actions/runs/7105414268/job/19342564583

```
Error: TestDataArray.test_computation_objects[int64-method_groupby_bins-data]
AssertionError: Left and right DataArray objects are not close

Differing values:
L
  <Quantity([[     nan      nan 1.       1.      ]
             [2.       2.       3.       3.      ]
             [4.       4.       5.       5.      ]
             [6.       6.       7.       7.      ]
             [8.       8.       9.       9.333333]], 'meter')>
R
  <Quantity([[0.       0.       1.       1.      ]
             [2.       2.       3.       3.      ]
             [4.       4.       5.       5.      ]
             [6.       6.       7.       7.      ]
             [8.       8.       9.       9.333333]], 'meter')>
```
 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8522/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1192478248 | I_kwDOAMm_X85HE8Yo | 6440 | Add `eval`? | max-sixty 5635139 | closed | 0 | 0 | 2022-04-05T00:57:00Z | 2023-12-06T17:52:47Z | 2023-12-06T17:52:47Z | MEMBER |

Is your feature request related to a problem?

We currently have

Describe the solution you'd like

Should we add an

Describe alternatives you've considered

No response

Additional context

No response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
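The idea mirrors `pandas.DataFrame.eval`: evaluate a string expression against the object's variables. A toy sketch of the concept, with a dict standing in for a Dataset (the PR in #7163 defers to pandas' expression engine; `toy_eval` here is purely illustrative):

```python
# Toy model: names in the expression resolve to the object's variables.
variables = {"a": 3, "b": 4}

def toy_eval(expr, namespace):
    # Restrict builtins so the expression can only reference the variables.
    return eval(expr, {"__builtins__": {}}, namespace)

assert toy_eval("a + b * 2", variables) == 11
```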
1410303926 | PR_kwDOAMm_X85A3Xqk | 7163 | Add `eval` method to Dataset | max-sixty 5635139 | closed | 0 | 3 | 2022-10-15T22:12:23Z | 2023-12-06T17:52:47Z | 2023-12-06T17:52:46Z | MEMBER | 0 | pydata/xarray/pulls/7163 | This needs proper tests & docs, but would this be a good idea? A couple of examples are in the docstring. It's mostly just deferring to pandas' excellent
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7163/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 } |
xarray 13221727 | pull | |||||
2019309352 | PR_kwDOAMm_X85g0KvI | 8493 | Use numbagg for `rolling` methods | max-sixty 5635139 | closed | 0 | 3 | 2023-11-30T18:52:08Z | 2023-12-05T19:08:32Z | 2023-12-05T19:08:31Z | MEMBER | 0 | pydata/xarray/pulls/8493 | A couple of tests are failing for the multi-dimensional case, which I'll fix before merge. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8493/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
907845790 | MDU6SXNzdWU5MDc4NDU3OTA= | 5413 | Does the PyPI release job fire twice for each release? | max-sixty 5635139 | closed | 0 | 2 | 2021-06-01T04:01:17Z | 2023-12-04T19:22:32Z | 2023-12-04T19:22:32Z | MEMBER | I was attempting to copy the great work here for numbagg and spotted this! Do we fire twice for each release? Maybe that's fine though? https://github.com/pydata/xarray/actions/workflows/pypi-release.yaml |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5413/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
929840699 | MDU6SXNzdWU5Mjk4NDA2OTk= | 5531 | Keyword only args for arguments like "drop" | max-sixty 5635139 | closed | 0 | 12 | 2021-06-25T05:24:25Z | 2023-12-04T19:22:24Z | 2023-12-04T19:22:23Z | MEMBER | Is your feature request related to a problem? Please describe. A method like This means that passing Describe the solution you'd like
Move to kwarg-only arguments for these; like But we probably need a deprecation cycle, which will require some work. Describe alternatives you've considered Not have a deprecation cycle? I imagine it's fairly rare to not pass the kwarg. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1165654699 | I_kwDOAMm_X85Fenqr | 6349 | Rolling exp correlation | max-sixty 5635139 | closed | 0 | 1 | 2022-03-10T19:51:57Z | 2023-12-04T19:13:35Z | 2023-12-04T19:13:34Z | MEMBER | Is your feature request related to a problem?I'd like an exponentially moving correlation coefficient Describe the solution you'd likeI think we could add a We could also add a flag for cosine similarity, which wouldn't remove the mean. We could also add I think we'd need to mask the variables on their intersection, so we don't have values that are missing from B affecting A's variance without affecting its covariance. Pandas does this in cython, possibly because it's faster to only do a single pass of the data. If anyone has correctness concerns about this simple approach of wrapping Describe alternatives you've consideredNumbagg Additional contextNo response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
2019577432 | PR_kwDOAMm_X85g1F3A | 8495 | Fix type of `.assign_coords` | max-sixty 5635139 | closed | 0 | 1 | 2023-11-30T21:57:58Z | 2023-12-04T19:11:57Z | 2023-12-04T19:11:55Z | MEMBER | 0 | pydata/xarray/pulls/8495 | As discussed in #8455 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1995489227 | I_kwDOAMm_X8528L_L | 8455 | Errors when assigning using `.from_pandas_multiindex` | max-sixty 5635139 | closed | 0 | 3 | 2023-11-15T20:09:15Z | 2023-12-04T19:10:12Z | 2023-12-04T19:10:11Z | MEMBER | What happened? Very possibly this is user-error, forgive me if so. I'm trying to transition some code from the previous assignment of MultiIndexes, to the new world. Here's an MCVE: What did you expect to happen? No response Minimal Complete Verifiable Example
```python
da = xr.tutorial.open_dataset("air_temperature")['air']

# old code, works, but with a warning
da.expand_dims('foo').assign_coords(foo=(pd.MultiIndex.from_tuples([(1,2)])))
# <ipython-input-25-f09b7f52bb42>:1: FutureWarning: the

# new code — seems to get confused between the number of values in the index — 1 —
# and the number of levels — 3 including the parent:
da.expand_dims('foo').assign_coords(foo=xr.Coordinates.from_pandas_multiindex(pd.MultiIndex.from_tuples([(1,2)]), dim='foo'))
```
```
ValueError                                Traceback (most recent call last)
Cell In[26], line 1
----> 1 da.expand_dims('foo').assign_coords(foo=xr.Coordinates.from_pandas_multiindex(pd.MultiIndex.from_tuples([(1,2)]), dim='foo'))

File ~/workspace/xarray/xarray/core/common.py:621, in DataWithCoords.assign_coords(self, coords, **coords_kwargs)
    618 else:
    619     results = self._calc_assign_results(coords_combined)
--> 621 data.coords.update(results)
    622 return data

File ~/workspace/xarray/xarray/core/coordinates.py:566, in Coordinates.update(self, other)
    560 # special case for PandasMultiIndex: updating only its dimension coordinate
    561 # is still allowed but depreciated.
    562 # It is the only case where we need to actually drop coordinates here (multi-index levels)
    563 # TODO: remove when removing PandasMultiIndex's dimension coordinate.
    564 self._drop_coords(self._names - coords_to_align._names)
--> 566 self._update_coords(coords, indexes)

File ~/workspace/xarray/xarray/core/coordinates.py:834, in DataArrayCoordinates._update_coords(self, coords, indexes)
    832 coords_plus_data = coords.copy()
    833 coords_plus_data[_THIS_ARRAY] = self._data.variable
--> 834 dims = calculate_dimensions(coords_plus_data)
    835 if not set(dims) <= set(self.dims):
    836     raise ValueError(
    837         "cannot add coordinates with new dimensions to a DataArray"
    838     )

File ~/workspace/xarray/xarray/core/variable.py:3014, in calculate_dimensions(variables)
   3012     last_used[dim] = k
   3013 elif dims[dim] != size:
-> 3014     raise ValueError(
   3015         f"conflicting sizes for dimension {dim!r}: "
   3016         f"length {size} on {k!r} and length {dims[dim]} on {last_used!r}"
   3017     )
   3018 return dims

ValueError: conflicting sizes for dimension 'foo': length 1 on <this-array> and length 3 on {'lat': 'lat', 'lon': 'lon', 'time': 'time', 'foo': 'foo'}
```
MVCE confirmation
Relevant log outputNo response Anything else we need to know?No response Environment
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.18 (main, Nov 2 2023, 16:51:22)
[Clang 14.0.3 (clang-1403.0.22.14.1)]
python-bits: 64
OS: Darwin
OS-release: 22.6.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: en_US.UTF-8
LANG: None
LOCALE: ('en_US', 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 2023.10.2.dev10+gccc8f998
pandas: 2.1.1
numpy: 1.25.2
scipy: 1.11.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.16.0
cftime: None
nc_time_axis: None
PseudoNetCDF: None
iris: None
bottleneck: None
dask: 2023.4.0
distributed: 2023.7.1
matplotlib: 3.5.1
cartopy: None
seaborn: None
numbagg: 0.2.3.dev30+gd26e29e
fsspec: 2021.11.1
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: 0.9.19
setuptools: 68.2.2
pip: 23.3.1
conda: None
pytest: 7.4.0
mypy: 1.6.0
IPython: 8.15.0
sphinx: 4.3.2
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8455/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
not_planned | xarray 13221727 | issue | ||||||
2022178394 | PR_kwDOAMm_X85g92vo | 8511 | Allow callables to `.drop_vars` | max-sixty 5635139 | closed | 0 | 0 | 2023-12-02T19:39:53Z | 2023-12-03T22:04:53Z | 2023-12-03T22:04:52Z | MEMBER | 0 | pydata/xarray/pulls/8511 | This can be used as a nice more general alternative to |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2021810083 | PR_kwDOAMm_X85g8r6c | 8508 | Implement `np.clip` as `__array_function__` | max-sixty 5635139 | closed | 0 | 2 | 2023-12-02T02:20:11Z | 2023-12-03T05:27:38Z | 2023-12-03T05:27:33Z | MEMBER | 0 | pydata/xarray/pulls/8508 | Would close https://github.com/pydata/xarray/issues/2570 Because of https://numpy.org/neps/nep-0018-array-function-protocol.html#partial-implementation-of-numpy-s-api, no option is ideal:
- Don't do anything — don't implement @shoyer is this summary accurate? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8508/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2019642778 | PR_kwDOAMm_X85g1URY | 8497 | Fully deprecate `.drop` | max-sixty 5635139 | closed | 0 | 0 | 2023-11-30T22:54:57Z | 2023-12-02T05:52:50Z | 2023-12-02T05:52:49Z | MEMBER | 0 | pydata/xarray/pulls/8497 | I think it's time... |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8497/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2013544848 | PR_kwDOAMm_X85ggbU0 | 8487 | Start renaming `dims` to `dim` | max-sixty 5635139 | closed | 0 | 1 | 2023-11-28T03:25:40Z | 2023-11-28T21:04:49Z | 2023-11-28T21:04:48Z | MEMBER | 0 | pydata/xarray/pulls/8487 | Begins the process of #6646. I don't think it's feasible / enjoyable to do this for everything at once, so I would suggest we do it gradually, while keeping the warnings quite quiet, so by the time we convert to louder warnings, users can do a find/replace easily. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2010795504 | PR_kwDOAMm_X85gXOqo | 8484 | Fix Zarr region transpose | max-sixty 5635139 | closed | 0 | 3 | 2023-11-25T21:01:28Z | 2023-11-27T20:56:57Z | 2023-11-27T20:56:56Z | MEMBER | 0 | pydata/xarray/pulls/8484 | This wasn't working on an unregion-ed write; I think because |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8484/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2010797682 | PR_kwDOAMm_X85gXPEM | 8485 | Refine rolling_exp error messages | max-sixty 5635139 | closed | 0 | 0 | 2023-11-25T21:09:52Z | 2023-11-25T21:55:20Z | 2023-11-25T21:55:20Z | MEMBER | 0 | pydata/xarray/pulls/8485 | (Sorry, copy & pasted too liberally!) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8485/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1966733834 | PR_kwDOAMm_X85eCSac | 8389 | Use numbagg for `ffill` by default | max-sixty 5635139 | closed | 0 | 5 | 2023-10-28T20:40:13Z | 2023-11-25T21:06:10Z | 2023-11-25T21:06:09Z | MEMBER | 0 | pydata/xarray/pulls/8389 | The main perf advantage here is that the array doesn't need to be unstacked & stacked, which is a huge win for large multi-dimensional arrays... (I actually was hitting a memory issue running an We could move these methods to For transparency — the logic of "check for numbagg, check for bottleneck" I wouldn't rate at my most confident. But I'm more confident that just installing numbagg will work. And if that works well enough, we could consider only supporting numbagg for some of these in the future. I also haven't done the benchmarks here — though the functions are relatively well benchmarked at numbagg. I'm somewhat trading off getting through these (rolling functions are coming up too) vs. doing fewer, more slowly, and leaning towards the former, but feedback welcome...
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8389/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1964877168 | PR_kwDOAMm_X85d8EmN | 8381 | Allow writing to zarr with differently ordered dims | max-sixty 5635139 | closed | 0 | 2 | 2023-10-27T06:47:59Z | 2023-11-25T21:02:20Z | 2023-11-15T18:09:08Z | MEMBER | 0 | pydata/xarray/pulls/8381 | Is this reasonable?
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2005419839 | PR_kwDOAMm_X85gFPfF | 8474 | Improve "variable not found" error message | max-sixty 5635139 | closed | 0 | 0 | 2023-11-22T01:52:47Z | 2023-11-24T18:49:39Z | 2023-11-24T18:49:38Z | MEMBER | 0 | pydata/xarray/pulls/8474 | One very small step as part of https://github.com/pydata/xarray/issues/8264. The existing error is just This PR creates a new test file. I don't love the format here — it's difficult to snapshot an error message, so it requires copying & pasting things, which doesn't scale well, and the traceback contains environment-specific lines such that it wouldn't be feasible to paste tracebacks. (here's what we do in PRQL, which is (immodestly) great) An alternative is just to put these in the mix of all the other tests; am open to that (and not difficult to change later) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8474/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2006891782 | PR_kwDOAMm_X85gKSKW | 8478 | Add whatsnew for #8475 | max-sixty 5635139 | closed | 0 | 0 | 2023-11-22T18:22:19Z | 2023-11-22T18:45:23Z | 2023-11-22T18:45:22Z | MEMBER | 0 | pydata/xarray/pulls/8478 | Sorry, forgot in the original PR |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2005656379 | PR_kwDOAMm_X85gGCSj | 8475 | Allow `rank` to run on dask arrays | max-sixty 5635139 | closed | 0 | 0 | 2023-11-22T06:22:44Z | 2023-11-22T16:45:03Z | 2023-11-22T16:45:02Z | MEMBER | 0 | pydata/xarray/pulls/8475 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2005744975 | PR_kwDOAMm_X85gGVaY | 8476 | Fix mypy tests | max-sixty 5635139 | closed | 0 | 0 | 2023-11-22T07:36:43Z | 2023-11-22T08:01:13Z | 2023-11-22T08:01:12Z | MEMBER | 0 | pydata/xarray/pulls/8476 | I was seeing an error in #8475 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8476/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2000139267 | PR_kwDOAMm_X85fzghA | 8464 | Fix `map_blocks` docs' formatting | max-sixty 5635139 | closed | 0 | 1 | 2023-11-18T01:18:02Z | 2023-11-21T18:25:16Z | 2023-11-21T18:25:15Z | MEMBER | 0 | pydata/xarray/pulls/8464 | Was looking funky. Not 100% sure this is correct but seems consistent with the others |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2000154383 | PR_kwDOAMm_X85fzju6 | 8466 | Move Sphinx directives out of `See also` | max-sixty 5635139 | open | 0 | 2 | 2023-11-18T01:57:17Z | 2023-11-21T18:25:05Z | MEMBER | 0 | pydata/xarray/pulls/8466 | This is potentially causing the |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8466/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2000146978 | PR_kwDOAMm_X85fziKs | 8465 | Consolidate `_get_alpha` func | max-sixty 5635139 | closed | 0 | 0 | 2023-11-18T01:37:25Z | 2023-11-21T18:24:52Z | 2023-11-21T18:24:51Z | MEMBER | 0 | pydata/xarray/pulls/8465 | Am changing this a bit so starting with consolidating it rather than converting twice |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
400444797 | MDExOlB1bGxSZXF1ZXN0MjQ1NjMwOTUx | 2687 | Enable resampling on PeriodIndex | max-sixty 5635139 | closed | 0 | 2 | 2019-01-17T20:13:25Z | 2023-11-17T20:38:44Z | 2023-11-17T20:38:44Z | MEMBER | 0 | pydata/xarray/pulls/2687 | This allows resampling with I'm still getting one failure around the name of the IndexVariable still being
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2687/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1995308522 | I_kwDOAMm_X8527f3q | 8454 | Formalize `mode` / safety guarantees for Zarr | max-sixty 5635139 | open | 0 | 1 | 2023-11-15T18:28:38Z | 2023-11-15T20:38:04Z | MEMBER | What is your issue?It sounds like we're coalescing on when it's safe to write concurrently:
- What are the existing operations that aren't consistent with this?
- Is concurrently writing additional variables safe? Or it requires updating the centralized consolidated metadata? Currently that requires |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1953001043 | I_kwDOAMm_X850aG5T | 8343 | Add `metadata_only` param to `.to_zarr`? | max-sixty 5635139 | open | 0 | 17 | 2023-10-19T20:25:11Z | 2023-11-15T05:22:12Z | MEMBER | Is your feature request related to a problem?A leaf from https://github.com/pydata/xarray/issues/8245, which has a bullet:
I've also noticed that for large arrays, running Describe the solution you'd likeWould introducing a Describe alternatives you've consideredNo response Additional contextNo response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8343/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1980019336 | I_kwDOAMm_X852BLKI | 8421 | `to_zarr` could transpose dims | max-sixty 5635139 | closed | 0 | 0 | 2023-11-06T20:38:35Z | 2023-11-14T19:23:08Z | 2023-11-14T19:23:08Z | MEMBER | Is your feature request related to a problem?Currently we need to know the order of dims when using Here's an MCVE:
```python
ds = xr.tutorial.load_dataset('air_temperature')

ds.to_zarr('foo', mode='w')
ds.transpose(..., 'lat').to_zarr('foo', mode='r+')

ValueError: variable 'air' already exists with different dimension names ('time', 'lat', 'lon') != ('time', 'lon', 'lat'), but changing variable dimensions is not supported by to_zarr().
```
Describe the solution you'd likeI think we should be able to transpose them based on the target? Describe alternatives you've consideredNo response Additional contextNo response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8421/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1986643906 | I_kwDOAMm_X852acfC | 8437 | Restrict pint test runs | max-sixty 5635139 | open | 0 | 10 | 2023-11-10T00:50:52Z | 2023-11-13T21:57:45Z | MEMBER | What is your issue?Pint tests are failing on main — https://github.com/pydata/xarray/actions/runs/6817674274/job/18541677930
If we can't fix soon, should we disable? CC @keewis |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8437/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1986758555 | PR_kwDOAMm_X85fGE95 | 8438 | Rename `to_array` to `to_dataarray` | max-sixty 5635139 | closed | 0 | 2 | 2023-11-10T02:58:21Z | 2023-11-10T06:15:03Z | 2023-11-10T06:15:02Z | MEMBER | 0 | pydata/xarray/pulls/8438 | This is a very minor nit, so I'm not sure it's worth changing. What do others think? (I would have opened an issue but it's just as quick to just do the PR) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8438/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
874039546 | MDU6SXNzdWU4NzQwMzk1NDY= | 5246 | test_save_mfdataset_compute_false_roundtrip fails | max-sixty 5635139 | open | 0 | 1 | 2021-05-02T20:41:48Z | 2023-11-02T04:38:05Z | MEMBER | What happened:
Here's the traceback: ```python self = <xarray.tests.test_backends.TestDask object at 0x000001FF45A9B640>
Anything else we need to know?: xfailed in https://github.com/pydata/xarray/pull/5245 Environment: [Eliding since it's the test env] |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1923431725 | I_kwDOAMm_X85ypT0t | 8264 | Improve error messages | max-sixty 5635139 | open | 0 | 4 | 2023-10-03T06:42:57Z | 2023-10-24T18:40:04Z | MEMBER | Is your feature request related to a problem?Coming back to xarray, and using it based on what I remember from a year ago or so, means I make lots of mistakes. I've also been using it outside of a repl, where error messages are more important, given I can't explore a dataset inline. Some of the error messages could be much more helpful. Take one example:
The second sentence is nice. But the first could be give us much more information:
- Which variables conflict? I'm merging four objects, so would be so helpful to know which are causing the issue.
- What is the conflict? Is one a superset and I can Having these good is really useful, lets folks stay in the flow while they're working, and it signals that we're a well-built, refined library. Describe the solution you'd likeI'm not sure the best way to surface the issues — error messages make for less legible contributions than features or bug fixes, and the primary audience for good error messages is often the opposite of those actively developing the library. They're also more difficult to manage as GH issues — there could be scores of marginal issues which would often be out of date. One thing we do in PRQL is have a file that snapshots error messages Any other ideas? Describe alternatives you've consideredNo response Additional contextA couple of specific error-message issues: - https://github.com/pydata/xarray/issues/2078 - https://github.com/pydata/xarray/issues/5290 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8264/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1952859208 | PR_kwDOAMm_X85dTmUR | 8341 | Deprecate tuples of chunks? | max-sixty 5635139 | closed | 0 | 1 | 2023-10-19T18:44:25Z | 2023-10-21T01:45:28Z | 2023-10-21T00:49:19Z | MEMBER | 0 | pydata/xarray/pulls/8341 | (I was planning on putting an issue in, but then thought it wasn't much more difficult to make the PR. But it's totally fine if we don't think this is a good idea...) Allowing a tuple of dims makes us reliant on dimension order, which we try hard to avoid. It also makes the type signature even more complicated. So are we OK to encourage a dict of |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8341/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1953143391 | PR_kwDOAMm_X85dUk-m | 8347 | 2023.10.1 release notes | max-sixty 5635139 | closed | 0 | 0 | 2023-10-19T22:19:43Z | 2023-10-19T22:42:48Z | 2023-10-19T22:42:47Z | MEMBER | 0 | pydata/xarray/pulls/8347 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1948037836 | PR_kwDOAMm_X85dDNka | 8325 | internal: Improve version handling for numbagg | max-sixty 5635139 | closed | 0 | 1 | 2023-10-17T18:45:43Z | 2023-10-19T15:59:15Z | 2023-10-19T15:59:14Z | MEMBER | 0 | pydata/xarray/pulls/8325 | Uses the approach in #8316, a bit nicer. Only internal. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1948548087 | PR_kwDOAMm_X85dE9ga | 8329 | Request to adjust pyright config | max-sixty 5635139 | closed | 0 | 3 | 2023-10-18T01:04:00Z | 2023-10-18T20:10:42Z | 2023-10-18T20:10:41Z | MEMBER | 0 | pydata/xarray/pulls/8329 | Would it be possible to not have this config? It overrides the local VS Code config, and means VS Code constantly is reporting errors for me. Totally open to other approaches ofc. Or that we decide that the tradeoff is worthwhile |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1948529004 | PR_kwDOAMm_X85dE5aA | 8327 | Add docs to `reindex_like` re broadcasting | max-sixty 5635139 | closed | 0 | 0 | 2023-10-18T00:46:52Z | 2023-10-18T18:16:43Z | 2023-10-18T16:51:12Z | MEMBER | 0 | pydata/xarray/pulls/8327 | This wasn't clear to me so I added some examples & a reference to |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1943054301 | PR_kwDOAMm_X85cyrdc | 8307 | Add `corr`, `cov`, `std` & `var` to `.rolling_exp` | max-sixty 5635139 | closed | 0 | 0 | 2023-10-14T07:25:31Z | 2023-10-18T17:35:35Z | 2023-10-18T16:55:35Z | MEMBER | 0 | pydata/xarray/pulls/8307 | From the new routines in numbagg. Maybe needs better tests (though these are quite heavily tested in numbagg), docs, and potentially need to think about types (maybe existing binary ops can help here?) (will fail while the build is cached on an old version of numbagg) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1948537810 | PR_kwDOAMm_X85dE7Te | 8328 | Refine curvefit doctest | max-sixty 5635139 | closed | 0 | 0 | 2023-10-18T00:55:16Z | 2023-10-18T01:19:27Z | 2023-10-18T01:19:26Z | MEMBER | 0 | pydata/xarray/pulls/8328 | A very small change |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1946081841 | PR_kwDOAMm_X85c8kKB | 8321 | Remove a couple of trailing commas in tests | max-sixty 5635139 | closed | 0 | 0 | 2023-10-16T20:57:04Z | 2023-10-16T21:26:50Z | 2023-10-16T21:26:49Z | MEMBER | 0 | pydata/xarray/pulls/8321 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1913983402 | I_kwDOAMm_X85yFRGq | 8233 | numbagg & flox | max-sixty 5635139 | closed | 0 | 13 | 2023-09-26T17:33:32Z | 2023-10-15T07:48:56Z | 2023-10-09T15:40:29Z | MEMBER | What is your issue?I've been doing some work recently on our old friend numbagg, improving the ewm routines & adding some more. I'm keen to get numbagg back in shape, doing the things that it does best, and trimming anything it doesn't. I notice that it has grouped calcs. Am I correct to think that flox does this better? I haven't been up with the latest. flox looks like it's particularly focused on dask arrays, whereas numpy_groupies, one of the inspirations for this, was applicable to numpy arrays too. At least from the xarray perspective, are we OK to deprecate these numbagg functions, and direct folks to flox? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1920172346 | PR_kwDOAMm_X85blZOk | 8256 | Accept `lambda` for `other` param | max-sixty 5635139 | closed | 0 | 0 | 2023-09-30T08:24:36Z | 2023-10-14T07:26:28Z | 2023-09-30T18:50:33Z | MEMBER | 0 | pydata/xarray/pulls/8256 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1931467868 | PR_kwDOAMm_X85cLSzK | 8283 | Ask bug reporters to confirm they're using a recent version of xarray | max-sixty 5635139 | closed | 0 | 0 | 2023-10-07T19:07:17Z | 2023-10-14T07:26:28Z | 2023-10-09T13:30:03Z | MEMBER | 0 | pydata/xarray/pulls/8283 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1931584082 | PR_kwDOAMm_X85cLpuZ | 8286 | Fix `GroupBy` import | max-sixty 5635139 | closed | 0 | 0 | 2023-10-08T01:15:37Z | 2023-10-14T07:26:28Z | 2023-10-09T13:38:44Z | MEMBER | 0 | pydata/xarray/pulls/8286 | Not sure why this only breaks tests for me, vs. in CI, but hopefully no downside to this change... |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1931581491 | PR_kwDOAMm_X85cLpMS | 8284 | Enable `.rolling_exp` to work on dask arrays | max-sixty 5635139 | closed | 0 | 0 | 2023-10-08T01:06:04Z | 2023-10-14T07:26:27Z | 2023-10-10T06:37:20Z | MEMBER | 0 | pydata/xarray/pulls/8284 | Another benefit of the move to |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8284/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1931582554 | PR_kwDOAMm_X85cLpap | 8285 | Add `min_weight` param to `rolling_exp` functions | max-sixty 5635139 | closed | 0 | 2 | 2023-10-08T01:09:59Z | 2023-10-14T07:24:48Z | 2023-10-14T07:24:48Z | MEMBER | 0 | pydata/xarray/pulls/8285 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1939241220 | PR_kwDOAMm_X85cmBPP | 8296 | mypy 1.6.0 passing | max-sixty 5635139 | closed | 0 | 4 | 2023-10-12T06:04:46Z | 2023-10-12T22:13:18Z | 2023-10-12T19:06:13Z | MEMBER | 0 | pydata/xarray/pulls/8296 | I did the easy things, but will need help for the final couple on Because we don't pin mypy (should we?), this blocks other PRs if we gate them on mypy passing |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1940614908 | PR_kwDOAMm_X85cqvBb | 8299 | xfail flaky test | max-sixty 5635139 | closed | 0 | 0 | 2023-10-12T19:03:59Z | 2023-10-12T22:00:51Z | 2023-10-12T22:00:47Z | MEMBER | 0 | pydata/xarray/pulls/8299 | Would be better to fix it, but in lieu of fixing, better to skip it |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1920359276 | PR_kwDOAMm_X85bl9er | 8257 | Mandate kwargs on `to_zarr` | max-sixty 5635139 | closed | 0 | 0 | 2023-09-30T18:33:13Z | 2023-10-12T18:33:15Z | 2023-10-04T19:05:02Z | MEMBER | 0 | pydata/xarray/pulls/8257 | This alleviates some of the dangers of having these in a different order between the `Dataset` and `DataArray` versions. Technically it's a breaking change, but only very technically, given that I would wager literally no one passes a dozen positional arguments to this method. So I think it's OK. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
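The mechanism behind a PR like "Mandate kwargs on `to_zarr`" is Python's keyword-only argument syntax: a bare `*` in the signature forces everything after it to be passed by name. A minimal sketch of the pattern (hypothetical `to_zarr` stub with made-up parameters, not xarray's actual signature):

```python
# A bare `*` separates positional-or-keyword parameters from
# keyword-only ones: `store` may be positional, the rest may not.
def to_zarr(store=None, *, mode="w", group=None, compute=True):
    return {"store": store, "mode": mode, "group": group, "compute": compute}

# Keyword call works:
result = to_zarr("out.zarr", mode="a")
assert result["mode"] == "a"

# Passing `mode` positionally now raises TypeError instead of
# silently landing in the wrong parameter:
try:
    to_zarr("out.zarr", "a")
except TypeError:
    print("positional 'mode' rejected")
```

This is why the break is only "very technically" breaking: the only callers affected are those passing several arguments positionally.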
1926810300 | PR_kwDOAMm_X85b7rlX | 8273 | Allow a function in `.sortby` method | max-sixty 5635139 | closed | 0 | 0 | 2023-10-04T19:04:03Z | 2023-10-12T18:33:14Z | 2023-10-06T03:35:22Z | MEMBER | 0 | pydata/xarray/pulls/8273 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
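The idea in "Allow a function in `.sortby` method" is the common pattern of accepting either a key name or a callable that computes sort keys from the object itself. A plain-Python sketch of that pattern (hypothetical `sortby` helper over a list of records, not xarray's implementation):

```python
# `by` is either a field name or a function that maps the whole
# collection to a list of sort keys.
def sortby(records, by):
    keys = by(records) if callable(by) else [r[by] for r in records]
    order = sorted(zip(keys, records), key=lambda pair: pair[0])
    return [record for _, record in order]

records = [{"x": 3}, {"x": 1}, {"x": 2}]
# Sort by a field name:
assert sortby(records, "x") == [{"x": 1}, {"x": 2}, {"x": 3}]
# Sort by a computed key (descending, via negation):
assert sortby(records, lambda rs: [-r["x"] for r in rs]) == [{"x": 3}, {"x": 2}, {"x": 1}]
```

The callable form avoids materializing a temporary sort-key variable before calling the method, which is the convenience the PR adds.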
1931585098 | PR_kwDOAMm_X85cLp7r | 8287 | Rename `reset_encoding` to `drop_encoding` | max-sixty 5635139 | closed | 0 | 1 | 2023-10-08T01:19:25Z | 2023-10-12T17:11:07Z | 2023-10-12T17:11:03Z | MEMBER | 0 | pydata/xarray/pulls/8287 | Closes #8259 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1920369929 | I_kwDOAMm_X85ydoUJ | 8259 | Should `.reset_encoding` be `.drop_encoding`? | max-sixty 5635139 | closed | 0 | 1 | 2023-09-30T19:11:46Z | 2023-10-12T17:11:06Z | 2023-10-12T17:11:06Z | MEMBER | What is your issue? Not the greatest issue facing the universe — but for the cause of consistency — should `.reset_encoding` instead be `.drop_encoding`? For comparison:
- Also ref #8258 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8259/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1917929597 | PR_kwDOAMm_X85bd2nm | 8249 | Refine `chunks=None` handling | max-sixty 5635139 | closed | 0 | 0 | 2023-09-28T16:54:59Z | 2023-10-04T18:34:27Z | 2023-09-28T20:01:13Z | MEMBER | 0 | pydata/xarray/pulls/8249 | Based on comment in https://github.com/pydata/xarray/pull/8247. This doesn't make it perfect, but allows the warning to get hit and clarifies the type comment, as a stop-gap |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1216647336 | PR_kwDOAMm_X8421oXV | 6521 | Move license from readme to LICENSE | max-sixty 5635139 | open | 0 | 3 | 2022-04-27T00:59:03Z | 2023-10-01T09:31:37Z | MEMBER | 0 | pydata/xarray/pulls/6521 | { "url": "https://api.github.com/repos/pydata/xarray/issues/6521/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||||
1918061661 | I_kwDOAMm_X85yU0xd | 8251 | `.chunk()` doesn't create chunks on 0 dim arrays | max-sixty 5635139 | open | 0 | 0 | 2023-09-28T18:30:50Z | 2023-09-30T21:31:05Z | MEMBER | What happened?
```
"""Coerce this array's data into a dask arrays with the given chunks.
```

...but this doesn't happen for 0-dim arrays; example below. For context, as part of #8245, I had a function that creates a template array. It created an empty

What did you expect to happen? It may be that we can't have a 0-dim dask array — but then we should raise in this method, rather than return the wrong thing.

Minimal Complete Verifiable Example:

```python
[ins] In [1]: type(xr.DataArray().chunk().data)
Out[1]: numpy.ndarray

[ins] In [2]: type(xr.DataArray(1).chunk().data)
Out[2]: numpy.ndarray

[ins] In [3]: type(xr.DataArray([1]).chunk().data)
Out[3]: dask.array.core.Array
```

MVCE confirmation
Relevant log output: No response

Anything else we need to know? No response

Environment
INSTALLED VERSIONS
------------------
commit: 0d6cd2a39f61128e023628c4352f653537585a12
python: 3.9.18 (main, Aug 24 2023, 21:19:58)
[Clang 14.0.3 (clang-1403.0.22.14.1)]
python-bits: 64
OS: Darwin
OS-release: 22.6.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: en_US.UTF-8
LANG: None
LOCALE: ('en_US', 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 2023.8.1.dev25+g8215911a.d20230914
pandas: 2.1.1
numpy: 1.25.2
scipy: 1.11.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.16.0
cftime: None
nc_time_axis: None
PseudoNetCDF: None
iris: None
bottleneck: None
dask: 2023.4.0
distributed: 2023.7.1
matplotlib: 3.5.1
cartopy: None
seaborn: None
numbagg: 0.2.3.dev30+gd26e29e
fsspec: 2021.11.1
cupy: None
pint: None
sparse: None
flox: 0.7.2
numpy_groupies: 0.9.19
setuptools: 68.1.2
pip: 23.2.1
conda: None
pytest: 7.4.0
mypy: 1.5.1
IPython: 8.15.0
sphinx: 4.3.2
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
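The expectation stated in #8251 — that an unsupported 0-dim case should raise rather than silently return the wrong type — can be sketched generically (hypothetical `chunk_or_raise` helper operating on a shape tuple; not xarray code):

```python
# Hypothetical guard: raise on 0-dim input rather than silently
# returning an unchunked (wrong-type) result.
def chunk_or_raise(shape):
    if len(shape) == 0:
        raise ValueError("cannot create chunks for a 0-dimensional array")
    # One chunk spanning each dimension (placeholder chunking scheme).
    return tuple((n,) for n in shape)

assert chunk_or_raise((10, 3)) == ((10,), (3,))

try:
    chunk_or_raise(())
except ValueError as err:
    print(err)
```

Either behavior (raising, or actually producing a 0-dim dask array) would satisfy the issue; the sketch shows only the "fail loudly" option.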
1920167070 | I_kwDOAMm_X85yc2ye | 8255 | Allow a `lambda` for the `other` param to `where` | max-sixty 5635139 | closed | 0 | 1 | 2023-09-30T08:05:54Z | 2023-09-30T19:02:42Z | 2023-09-30T19:02:42Z | MEMBER | Is your feature request related to a problem? Currently we allow:
...but we don't allow:
...which would be nice. Describe the solution you'd like: No response. Describe alternatives you've considered: I don't think this offers many downsides — it's not like we want to fill the array with a callable object. Additional context: No response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
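The feature requested in #8255 is the value-or-callable pattern: a parameter that is used directly if it is a plain value, and called on the data if it is a function. A minimal plain-Python sketch (hypothetical `where` helper over lists, not xarray's implementation):

```python
# If `other` is callable, call it on the data to produce the fill
# values; otherwise treat it as a scalar fill.
def where(data, cond, other):
    fill = other(data) if callable(other) else other
    fills = fill if isinstance(fill, list) else [fill] * len(data)
    return [x if cond(x) else f for x, f in zip(data, fills)]

data = [1, 2, 3, 4]
# Scalar `other`:
assert where(data, lambda x: x > 2, 0) == [0, 0, 3, 4]
# Callable `other`, computed from the data itself:
assert where(data, lambda x: x > 2, lambda d: [x * 10 for x in d]) == [10, 20, 3, 4]
```

As the issue notes, dispatching on `callable()` is safe here precisely because nobody wants to fill an array with a callable object.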
124154674 | MDU6SXNzdWUxMjQxNTQ2NzQ= | 688 | Keep attrs & Add a 'keep_coords' argument to Dataset.apply | max-sixty 5635139 | closed | 0 | 14 | 2015-12-29T02:42:48Z | 2023-09-30T18:47:07Z | 2023-09-30T18:47:07Z | MEMBER | Generally this isn't a problem, since the coords are carried over by the resulting

``` python
In [11]: ds = xray.Dataset({ 'a':pd.DataFrame(pd.np.random.rand(10,3)), 'b':pd.Series(pd.np.random.rand(10)) })
         ds.coords['c'] = pd.Series(pd.np.random.rand(10))
         ds
Out[11]:
<xray.Dataset>
Dimensions: (dim_0: 10, dim_1: 3)
Coordinates:
  * dim_0  (dim_0) int64 0 1 2 3 4 5 6 7 8 9
  * dim_1  (dim_1) int64 0 1 2
    c      (dim_0) float64 0.9318 0.2899 0.3853 0.6235 0.9436 0.7928 ...
Data variables:
    a      (dim_0, dim_1) float64 0.5707 0.9485 0.3541 0.5987 0.406 0.7992 ...
    b      (dim_0) float64 0.4106 0.2316 0.5804 0.6393 0.5715 0.6463 ...

In [12]: ds.apply(lambda x: x*2)
Out[12]:
<xray.Dataset>
Dimensions: (dim_0: 10, dim_1: 3)
Coordinates:
    c      (dim_0) float64 0.9318 0.2899 0.3853 0.6235 0.9436 0.7928 ...
  * dim_0  (dim_0) int64 0 1 2 3 4 5 6 7 8 9
  * dim_1  (dim_1) int64 0 1 2
Data variables:
    a      (dim_0, dim_1) float64 1.141 1.897 0.7081 1.197 0.812 1.598 ...
    b      (dim_0) float64 0.8212 0.4631 1.161 1.279 1.143 1.293 0.3507 ...
```

But if there's an operation that removes the coords from the

``` python
In [13]: ds = xray.Dataset({ 'a':pd.DataFrame(pd.np.random.rand(10,3)), 'b':pd.Series(pd.np.random.rand(10)) })
         ds.coords['c'] = pd.Series(pd.np.random.rand(10))
         ds
Out[13]:
<xray.Dataset>
Dimensions: (dim_0: 10, dim_1: 3)
Coordinates:
  * dim_0  (dim_0) int64 0 1 2 3 4 5 6 7 8 9
  * dim_1  (dim_1) int64 0 1 2
    c      (dim_0) float64 0.4121 0.2507 0.6326 0.4031 0.6169 0.441 0.1146 ...
Data variables:
    a      (dim_0, dim_1) float64 0.4813 0.2479 0.5158 0.2787 0.06672 ...
    b      (dim_0) float64 0.2638 0.5788 0.6591 0.7174 0.3645 0.5655 ...

In [14]: ds.apply(lambda x: x.to_pandas()*2)
Out[14]:
<xray.Dataset>
Dimensions: (dim_0: 10, dim_1: 3)
Coordinates:
  * dim_0  (dim_0) int64 0 1 2 3 4 5 6 7 8 9
  * dim_1  (dim_1) int64 0 1 2
Data variables:
    a      (dim_0, dim_1) float64 0.9627 0.4957 1.032 0.5574 0.1334 0.8289 ...
    b      (dim_0) float64 0.5275 1.158 1.318 1.435 0.7291 1.131 0.1903 ...
``` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1916391948 | PR_kwDOAMm_X85bYlaM | 8242 | Add modules to `check-untyped` | max-sixty 5635139 | closed | 0 | 2 | 2023-09-27T21:56:45Z | 2023-09-29T17:43:07Z | 2023-09-29T16:39:34Z | MEMBER | 0 | pydata/xarray/pulls/8242 | In reviewing https://github.com/pydata/xarray/pull/8241, I realize that we actually want these modules checked. Errors with this enabled are actual type errors, not just |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1878288525 | PR_kwDOAMm_X85ZYos5 | 8139 | Fix pandas' `interpolate(fill_value=)` error | max-sixty 5635139 | closed | 0 | 6 | 2023-09-02T02:41:45Z | 2023-09-28T16:48:51Z | 2023-09-04T18:05:14Z | MEMBER | 0 | pydata/xarray/pulls/8139 | Pandas no longer has a Weirdly I wasn't getting this locally, on pandas 2.1.0, only in CI on https://github.com/pydata/xarray/actions/runs/6054400455/job/16431747966?pr=8138. Removing it passes locally, let's see whether this works in CI Would close #8125 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8139/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
CREATE TABLE [issues] (
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [number] INTEGER,
  [title] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [state] TEXT,
  [locked] INTEGER,
  [assignee] INTEGER REFERENCES [users]([id]),
  [milestone] INTEGER REFERENCES [milestones]([id]),
  [comments] INTEGER,
  [created_at] TEXT,
  [updated_at] TEXT,
  [closed_at] TEXT,
  [author_association] TEXT,
  [active_lock_reason] TEXT,
  [draft] INTEGER,
  [pull_request] TEXT,
  [body] TEXT,
  [reactions] TEXT,
  [performed_via_github_app] TEXT,
  [state_reason] TEXT,
  [repo] INTEGER REFERENCES [repos]([id]),
  [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
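The schema above can be exercised directly with Python's built-in `sqlite3`. The sketch below uses an in-memory database with a trimmed version of the table (foreign-key columns kept, referenced tables omitted; SQLite does not enforce `REFERENCES` unless `PRAGMA foreign_keys` is turned on) and reproduces the query behind this page: rows where `user = 5635139`, newest first.

```python
import sqlite3

# In-memory database with a trimmed [issues] schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE issues (
        id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
        user INTEGER, state TEXT, updated_at TEXT, type TEXT
    )
""")
conn.execute("CREATE INDEX idx_issues_user ON issues (user)")

# Two rows taken from the listing above.
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        (2272299822, 8989, "Skip flaky `test_open_mfdataset_manyfiles` test",
         5635139, "closed", "2024-04-30T20:27:04Z", "pull"),
        (1920369929, 8259, "Should `.reset_encoding` be `.drop_encoding`?",
         5635139, "closed", "2023-10-12T17:11:06Z", "issue"),
    ],
)

# The page's query: issues where user = 5635139, sorted by updated_at descending.
rows = conn.execute(
    "SELECT number, title FROM issues WHERE user = ? ORDER BY updated_at DESC",
    (5635139,),
).fetchall()
print(rows[0][0])  # → 8989, the most recently updated row
```

ISO-8601 timestamps stored as TEXT sort correctly under plain string comparison, which is why `ORDER BY updated_at DESC` works without any date parsing.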