issues
70 rows where state = "open" and user = 2448579 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2278499376 | PR_kwDOAMm_X85uhFke | 8997 | Zarr: Optimize `region="auto"` detection | dcherian 2448579 | open | 0 | 1 | 2024-05-03T22:13:18Z | 2024-05-04T21:47:39Z | MEMBER | 0 | pydata/xarray/pulls/8997 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8997/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||||
2278510478 | PR_kwDOAMm_X85uhIGP | 8998 | Zarr: Optimize appending | dcherian 2448579 | open | 0 | 0 | 2024-05-03T22:21:44Z | 2024-05-03T22:23:34Z | MEMBER | 1 | pydata/xarray/pulls/8998 | Builds on #8997 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1915997507 | I_kwDOAMm_X85yM81D | 8238 | NamedArray tracking issue | dcherian 2448579 | open | 0 | 12 | 2023-09-27T17:07:58Z | 2024-04-30T12:49:17Z | MEMBER | @andersy005 I think it would be good to keep a running list of NamedArray tasks. I'll start with a rough sketch; please update/edit as you like.
xref #3981 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8238/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
2259316341 | I_kwDOAMm_X86Gqm51 | 8965 | Support concurrent loading of variables | dcherian 2448579 | open | 0 | 4 | 2024-04-23T16:41:24Z | 2024-04-29T22:21:51Z | MEMBER | **Is your feature request related to a problem?** Today, if users want to concurrently load multiple variables in a DataArray or Dataset, they have to use dask. It struck me that it'd be pretty easy for |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8965/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
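The idea in this issue can be sketched without xarray: dispatch each variable's (I/O-bound) load to a thread pool instead of loading serially. Everything below is a hypothetical stand-in, not xarray's API: `lazy_vars` maps names to zero-argument loader callables, playing the role of a Dataset's unloaded variables.

```python
from concurrent.futures import ThreadPoolExecutor

def load_concurrently(lazy_vars, max_workers=4):
    """Load name -> loader-callable pairs concurrently.

    A minimal sketch of the proposal: each loader stands in for a lazy
    variable's ``.load()``; real I/O-bound loads would overlap in the pool.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {name: pool.submit(loader) for name, loader in lazy_vars.items()}
        # result() re-raises any exception raised in the worker thread
        return {name: fut.result() for name, fut in futures.items()}

# toy loaders standing in for lazily-indexed backend arrays
loaded = load_concurrently({"a": lambda: [1, 2, 3], "b": lambda: [4, 5, 6]})
```

In real backends the gain comes from overlapping network or disk reads; for in-memory data a pool like this adds only overhead.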
2187743087 | PR_kwDOAMm_X85ptH1f | 8840 | Grouper, Resampler as public api | dcherian 2448579 | open | 0 | 0 | 2024-03-15T05:16:05Z | 2024-04-21T16:21:34Z | MEMBER | 1 | pydata/xarray/pulls/8840 | Expose Grouper and Resampler as public API. TODO:
- [ ] Consider avoiding IndexVariable
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2248614324 | I_kwDOAMm_X86GByG0 | 8952 | `isel(multi_index_level_name = MultiIndex.level)` corrupts the MultiIndex | dcherian 2448579 | open | 0 | 1 | 2024-04-17T15:41:39Z | 2024-04-18T13:14:46Z | MEMBER | **What happened?** From https://github.com/pydata/xarray/discussions/8951 if cc @benbovy **What did you expect to happen?** No response **Minimal Complete Verifiable Example**
```python
import pandas as pd, xarray as xr, numpy as np

xr.set_options(use_flox=True)

test = pd.DataFrame()
test["x"] = np.arange(100) % 10
test["y"] = np.arange(100)
test["z"] = np.arange(100)
test["v"] = np.arange(100)

d = xr.Dataset.from_dataframe(test)
d = d.set_index(index=["x", "y", "z"])
print(d)

m = d.groupby("x").mean()
print(m)

print(d.xindexes)
print(m.isel(x=d.x).xindexes)

xr.align(d, m.isel(x=d.x))

res = d.groupby("x") - m
print(res)
```
MVCE confirmation
**Relevant log output** No response **Anything else we need to know?** No response **Environment** |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8952/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
2215762637 | PR_kwDOAMm_X85rMHpN | 8893 | Avoid extra read from disk when creating Pandas Index. | dcherian 2448579 | open | 0 | 1 | 2024-03-29T17:44:52Z | 2024-04-08T18:55:09Z | MEMBER | 0 | pydata/xarray/pulls/8893 | { "url": "https://api.github.com/repos/pydata/xarray/issues/8893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||||
2228319306 | I_kwDOAMm_X86E0XRK | 8914 | swap_dims does not propagate indexes properly | dcherian 2448579 | open | 0 | 0 | 2024-04-05T15:36:26Z | 2024-04-05T15:36:27Z | MEMBER | **What happened?** Found by hypothesis:
```python
import xarray as xr
import numpy as np

var = xr.Variable(
    dims="2",
    data=np.array(
        [
            '1970-01-01T00:00:00.000000000',
            '1970-01-01T00:00:00.000000002',
            '1970-01-01T00:00:00.000000001',
        ],
        dtype='datetime64[ns]',
    ),
)
var1 = xr.Variable(data=np.array([0], dtype=np.uint32), dims=['1'], attrs={})

state = xr.Dataset()
state['2'] = var
state = state.stack({"0": ["2"]})
state['1'] = var1
state['1_'] = var1  # .copy(deep=True)
state = state.swap_dims({"1": "1_"})
xr.testing.assertions._assert_internal_invariants(state, False)
```
This swaps simple pandas-indexed dims, but the multi-index that is in the dataset and not affected by the swap_dims op ends up broken. cc @benbovy **What did you expect to happen?** No response **Minimal Complete Verifiable Example** No response MVCE confirmation
**Relevant log output** No response **Anything else we need to know?** No response **Environment** |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
2224297504 | PR_kwDOAMm_X85rpGUH | 8906 | Add invariant check for IndexVariable.name | dcherian 2448579 | open | 0 | 1 | 2024-04-04T02:13:33Z | 2024-04-05T07:12:54Z | MEMBER | 1 | pydata/xarray/pulls/8906 | @benbovy this seems to be the root cause of #8646, the variable name in A good number of tests seem to fail though, so not sure if this is a good check.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8906/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1997636679 | PR_kwDOAMm_X85frAC_ | 8460 | Add initialize_zarr | dcherian 2448579 | open | 0 | 8 | 2023-11-16T19:45:05Z | 2024-04-02T15:08:01Z | MEMBER | 1 | pydata/xarray/pulls/8460 |
The intended pattern is: ```python
``` cc @slevang |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8460/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 3, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 } |
xarray 13221727 | pull | ||||||
2213636579 | I_kwDOAMm_X86D8Wnj | 8887 | resetting multiindex may be buggy | dcherian 2448579 | open | 0 | 1 | 2024-03-28T16:23:38Z | 2024-03-29T07:59:22Z | MEMBER | **What happened?** Resetting a MultiIndex dim coordinate preserves the MultiIndex levels as IndexVariables. We should either reset the indexes for the multiindex level variables, or warn asking the users to do so. This seems to be the root cause exposed by https://github.com/pydata/xarray/pull/8809 cc @benbovy **What did you expect to happen?** No response **Minimal Complete Verifiable Example**
```python
import numpy as np
import xarray as xr

# ND DataArray that gets stacked along a multiindex
da = xr.DataArray(np.ones((3, 3)), coords={"dim1": [1, 2, 3], "dim2": [4, 5, 6]})
da = da.stack(feature=["dim1", "dim2"])

# Extract just the stacked coordinates for saving in a dataset
ds = xr.Dataset(data_vars={"feature": da.feature})

xr.testing.assertions._assert_internal_invariants(
    ds.reset_index(["feature", "dim1", "dim2"]), check_default_indexes=False
)  # succeeds

xr.testing.assertions._assert_internal_invariants(
    ds.reset_index(["feature"]), check_default_indexes=False
)  # fails, but no warning either
```
 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1471685307 | I_kwDOAMm_X85XuCK7 | 7344 | Disable bottleneck by default? | dcherian 2448579 | open | 0 | 11 | 2022-12-01T17:26:11Z | 2024-03-27T00:22:41Z | MEMBER | **What is your issue?** Our choice to enable bottleneck by default results in quite a few issues about numerical stability and funny dtype behaviour: #7336, #7128, #2370, #1346 (and probably more). Shall we disable it by default? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
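The numerical-stability concern behind this issue can be demonstrated with plain NumPy. This sketch is independent of bottleneck itself; it only illustrates why accumulating in float32 (which the linked issues attribute to bottleneck's reductions) loses precision, while NumPy's pairwise summation keeps partial sums small enough to stay exact.

```python
import numpy as np

# A float32 accumulator at 2**24 has a spacing (ulp) of 2.0, so adding 1.0
# rounds straight back to the accumulator and the addend vanishes entirely.
acc = np.float32(2**24)                  # 16777216.0
lost = np.float32(acc + np.float32(1.0))
assert lost == acc                       # naive single-pass accumulation drifts

# NumPy's pairwise summation combines small partial sums, so the same
# count of ones survives even past 2**24.
x = np.ones(2**24 + 4, dtype=np.float32)
total = x.sum(dtype=np.float32)
```

A naive left-to-right float32 sum of that array would stall at 16777216.0; pairwise summation recovers the full count.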
2187659148 | I_kwDOAMm_X86CZQeM | 8838 | remove xfail from `test_dataarray.test_to_dask_dataframe()` | dcherian 2448579 | open | 0 | 2 | 2024-03-15T03:43:02Z | 2024-03-15T15:33:31Z | MEMBER | **What is your issue?** Remove the xfail when dask-expr is fixed. Added in https://github.com/pydata/xarray/pull/8837 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
2021856935 | PR_kwDOAMm_X85g81gb | 8509 | Proof of concept - public Grouper objects | dcherian 2448579 | open | 0 | 0 | 2023-12-02T04:52:27Z | 2024-03-15T05:18:18Z | MEMBER | 1 | pydata/xarray/pulls/8509 | Not for merging, just proof that it can be done nicely :) Now builds on #8840 ~Builds on an older version of #8507~ Try it out!
```python
import xarray as xr
from xarray.core.groupers import SeasonGrouper, SeasonResampler

ds = xr.tutorial.open_dataset("air_temperature")

# custom seasons!
ds.air.groupby(time=SeasonGrouper(["JF", "MAM", "JJAS", "OND"])).mean()

ds.air.resample(time=SeasonResampler(["DJF", "MAM", "JJAS", "ON"])).count()
```
 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8509/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2149485914 | I_kwDOAMm_X86AHo1a | 8778 | Stricter defaults for concat, combine, open_mfdataset | dcherian 2448579 | open | 0 | 2 | 2024-02-22T16:43:38Z | 2024-02-23T04:17:40Z | MEMBER | **Is your feature request related to a problem?** The defaults for
While "convenient", this really just makes the default experience quite bad, with hard-to-understand slowdowns. **Describe the solution you'd like** I propose we migrate to Unfortunately, this has a pretty big blast radius, so we'd need a long deprecation cycle. **Describe alternatives you've considered** No response **Additional context** xref https://github.com/pydata/xarray/issues/4824 xref https://github.com/pydata/xarray/issues/1385 xref https://github.com/pydata/xarray/issues/8231 xref https://github.com/pydata/xarray/issues/5381 xref https://github.com/pydata/xarray/issues/2064 xref https://github.com/pydata/xarray/issues/2217 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8778/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
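To make the trade-off concrete, here is a toy illustration in pure Python (not xarray's actual alignment code) of what `join="outer"` does versus a strict `join="exact"`: the outer join silently expands both operands to the union of their labels and fills the holes, while the exact join refuses mismatched labels up front.

```python
def align_outer(a, b, fill=None):
    # Expand both mappings to the union of labels, filling gaps:
    # analogous to xarray's default join="outer".
    keys = sorted(set(a) | set(b))
    return ({k: a.get(k, fill) for k in keys},
            {k: b.get(k, fill) for k in keys})

def align_exact(a, b):
    # Refuse to align mismatched labels: analogous to join="exact".
    if set(a) != set(b):
        raise ValueError("coordinate labels do not match")
    return a, b

x = {0: 1.0, 1: 2.0}
y = {1: 10.0, 2: 20.0}

xa, ya = align_outer(x, y)   # both silently grow to 3 labels with fills
ok = align_exact(xa, ya)     # after outer alignment the labels match
```

With thousands of files, the silent union-and-fill step is exactly where the hard-to-understand slowdowns (and surprise NaNs) come from.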
638947370 | MDU6SXNzdWU2Mzg5NDczNzA= | 4156 | writing sparse to netCDF | dcherian 2448579 | open | 0 | 7 | 2020-06-15T15:33:23Z | 2024-01-09T10:14:00Z | MEMBER | I haven't looked at this too closely but it appears that this is a way to save MultiIndexed datasets to netCDF. So we may be able to do cc @fujiisoup |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
2064480451 | I_kwDOAMm_X857DXjD | 8582 | Adopt SPEC 0 instead of NEP-29 | dcherian 2448579 | open | 0 | 1 | 2024-01-03T18:36:24Z | 2024-01-03T20:12:05Z | MEMBER | **What is your issue?** https://docs.xarray.dev/en/stable/getting-started-guide/installing.html#minimum-dependency-versions says that we follow NEP-29, and I think our min versions script also does that. I propose we follow https://scientific-python.org/specs/spec-0000/ In practice, I think this means we mostly drop Python versions earlier. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8582/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
2052952379 | I_kwDOAMm_X856XZE7 | 8568 | Raise when assigning attrs to virtual variables (default coordinate arrays) | dcherian 2448579 | open | 0 | 0 | 2023-12-21T19:24:11Z | 2023-12-21T19:24:19Z | MEMBER | Discussed in https://github.com/pydata/xarray/discussions/8567
<sup>Originally posted by **matthew-brett** December 21, 2023</sup>
Sorry for the introductory question, but we (@ivanov and I) ran into this behavior while experimenting:
```python
import numpy as np
import xarray as xr
data = np.zeros((3, 4, 5))
ds = xr.DataArray(data, dims=('i', 'j', 'k'))
print(ds['k'].attrs)
```
This shows `{}` as we might reasonably expect. But then:
```python
ds['k'].attrs['foo'] = 'bar'
print(ds['k'].attrs)
```
This also gives `{}`, which we found surprising. We worked out why that was, after a little experimentation (the default coordinate arrays seems to get created on the fly and garbage collected immediately). But it took us a little while. Is that as intended? Is there a way of making this less confusing?
Thanks for any help. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8568/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
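The behavior described in this issue, mutating something that is created on the fly and immediately discarded, can be mimicked with a few lines of plain Python. This is purely an analogy; `Store` is a hypothetical class, not xarray's implementation: each access manufactures a fresh object, so mutating the returned value never sticks.

```python
class Store:
    """Each __getitem__ builds a brand-new dict, like a default
    coordinate array materialized on the fly."""
    def __getitem__(self, key):
        return {}

s = Store()
s["k"]["foo"] = "bar"    # mutates a temporary that is garbage collected
attrs = s["k"]           # a fresh, empty dict again
```

The assignment succeeds without error, which is why the silent loss is so surprising; raising on attribute assignment to such virtual variables would surface it.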
1954809370 | I_kwDOAMm_X850hAYa | 8353 | Update benchmark suite for asv 0.6.1 | dcherian 2448579 | open | 0 | 0 | 2023-10-20T18:13:22Z | 2023-12-19T05:53:21Z | MEMBER | The new asv version comes with decorators for parameterizing and skipping, and the ability to use https://github.com/airspeed-velocity/asv/releases https://asv.readthedocs.io/en/v0.6.1/writing_benchmarks.html#skipping-benchmarks This might help us reduce benchmark times a bit, or at least simplify the code some. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
2027147099 | I_kwDOAMm_X854089b | 8523 | tree-reduce the combine for `open_mfdataset(..., parallel=True, combine="nested")` | dcherian 2448579 | open | 0 | 4 | 2023-12-05T21:24:51Z | 2023-12-18T19:32:39Z | MEMBER | **Is your feature request related to a problem?** When Instead we can tree-reduce the combine (example) by switching to
cc @TomNicholas |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
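The tree-reduction itself is a small, generic pattern. A sketch in plain Python (not xarray's actual implementation; `combine` stands in for something like `xr.combine_nested`, and `batch` for the fan-in) looks like:

```python
from functools import reduce

def tree_combine(items, combine, batch=2):
    """Combine a list level by level instead of one flat left-to-right fold.

    With n items and fan-in `batch`, the graph has depth O(log n) rather
    than O(n), which is what lets a scheduler run combines concurrently.
    """
    items = list(items)
    while len(items) > 1:
        items = [reduce(combine, items[i:i + batch])
                 for i in range(0, len(items), batch)]
    return items[0]

total = tree_combine(range(8), lambda a, b: a + b)  # same result as a flat fold
```

For associative combines the result is identical to the flat fold; only the shape of the task graph changes.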
1975400777 | PR_kwDOAMm_X85efqSl | 8408 | Generalize explicit_indexing_adapter | dcherian 2448579 | open | 0 | 0 | 2023-11-03T03:29:40Z | 2023-11-03T03:53:25Z | MEMBER | 1 | pydata/xarray/pulls/8408 | Use |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1950211465 | I_kwDOAMm_X850Pd2J | 8333 | Should NamedArray be interchangeable with other array types? or Should we support the `axis` kwarg? | dcherian 2448579 | open | 0 | 17 | 2023-10-18T16:46:37Z | 2023-10-31T22:26:33Z | MEMBER | **What is your issue?** Raising @Illviljan's comment from https://github.com/pydata/xarray/pull/8304#discussion_r1363196597. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1952621896 | I_kwDOAMm_X850YqVI | 8337 | Support rolling with numbagg | dcherian 2448579 | open | 0 | 3 | 2023-10-19T16:11:40Z | 2023-10-23T15:46:36Z | MEMBER | **Is your feature request related to a problem?** We can do plain reductions, and groupby reductions with numbagg. Rolling is the last one left! I don't think coarsen will benefit since it's basically a reshape and reduce on that view, so it should already be accelerated. There may be small gains in handling the boundary conditions but that's probably it. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
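For reference, the computation being accelerated here is a single-pass moving-window reduction. A simplified pure-Python sketch (it averages partial windows at the start, unlike xarray's `min_periods` default, which would yield NaN there) is:

```python
from collections import deque

def rolling_mean(values, window):
    """O(n) rolling mean: maintain a running sum over a fixed-size window,
    the kind of single-pass structure kernels like numbagg's move_mean use."""
    out, acc, buf = [], 0.0, deque()
    for v in values:
        buf.append(v)
        acc += v
        if len(buf) > window:
            acc -= buf.popleft()   # evict the value leaving the window
        out.append(acc / len(buf))
    return out

result = rolling_mean([1, 2, 3, 4], window=2)  # [1.0, 1.5, 2.5, 3.5]
```

The running-sum trick is what makes compiled rolling kernels fast: each element is touched a constant number of times regardless of window size.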
1954445639 | I_kwDOAMm_X850fnlH | 8350 | optimize align for scalars at least | dcherian 2448579 | open | 0 | 5 | 2023-10-20T14:48:25Z | 2023-10-20T19:17:39Z | MEMBER | **What happened?** Here's a simple rescaling calculation:
```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"a": (("x", "y"), np.ones((300, 400))), "b": (("x", "y"), np.ones((300, 400)))}
)

mean = ds.mean()  # scalar
std = ds.std()  # scalar
rescaled = (ds - mean) / std
```
The profile for the last line shows 30% (!!!) time spent in This is a small example inspired by an ML pipeline where this normalization is happening very many times in a tight loop. cc @benbovy **What did you expect to happen?** A fast path for when no reindexing needs to happen. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8350/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1943543755 | I_kwDOAMm_X85z2B_L | 8310 | pydata/xarray as monorepo for Xarray and NamedArray | dcherian 2448579 | open | 0 | 1 | 2023-10-14T20:34:51Z | 2023-10-14T21:29:11Z | MEMBER | **What is your issue?** As we work through refactoring for NamedArray, it's pretty clear that Xarray will depend pretty closely on many files in I propose we use pydata/xarray as a monorepo that serves two packages: NamedArray and Xarray. - We can move as much as is needed to have NamedArray be independent of Xarray, but Xarray will depend quite closely on many utility functions in NamedArray. - We can release both at the same time similar to dask and distributed. - We can re-evaluate if and when NamedArray grows its own community. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8310/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1942893480 | I_kwDOAMm_X85zzjOo | 8306 | keep_attrs for NamedArray | dcherian 2448579 | open | 0 | 0 | 2023-10-14T02:29:54Z | 2023-10-14T02:31:35Z | MEMBER | **What is your issue?** Copying over @max-sixty's comment from https://github.com/pydata/xarray/pull/8304#discussion_r1358873522
@pydata/xarray Should we just delete the |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8306/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1916012703 | I_kwDOAMm_X85yNAif | 8239 | Address repo-review suggestions | dcherian 2448579 | open | 0 | 7 | 2023-09-27T17:18:40Z | 2023-10-02T20:24:34Z | MEMBER | **What is your issue?** Here's the output from the Scientific Python Repo Review tool. There's an online version here. On mac I run
A lot of these seem fairly easy to fix. I'll note that there's a large number of General
**Projects must have a PyProject** See https://github.com/pydata/xarray/issues/8239#issuecomment-1739363809 <table> <tr><th>?</th><th>Name</th><th>Description</th></tr> <tr style="color: red;"> <td>❌</td> <td>PP305</td> <td> Specifies xfail_strict
</td>
</tr>
<tr style="color: red;">
<td>❌</td>
<td>PP308</td>
<td>
Specifies useful pytest summary
</td>
</tr>
</table>
Pre-commit<table> <tr><th>?</th><th>Name</th><th>Description</th></tr> <tr style="color: red;"> <td>❌</td> <td>PC110</td> <td> Uses blackUse Must have Must have Must have If Should have something like this in
</td>
</tr>
</table>
MyPy<table> <tr><th>?</th><th>Name</th><th>Description</th></tr> <tr style="color: red;"> <td>❌</td> <td>MY101</td> <td> MyPy strict modeMust have
</td>
</tr>
<tr style="color: red;">
<td>❌</td>
<td>MY103</td>
<td>
MyPy warn unreachable
Must have
</td>
</tr>
<tr style="color: red;">
<td>❌</td>
<td>MY104</td>
<td>
MyPy enables ignore-without-code
Must have
</td>
</tr>
<tr style="color: red;">
<td>❌</td>
<td>MY105</td>
<td>
MyPy enables redundant-expr
Must have
</td>
</tr>
<tr style="color: red;">
<td>❌</td>
<td>MY106</td>
<td>
MyPy enables truthy-bool
Must have
</td>
</tr>
</table>
Ruff<table> <tr><th>?</th><th>Name</th><th>Description</th></tr> <tr style="color: red;"> <td>❌</td> <td>RF101</td> <td> Bugbear must be selectedMust select the flake8-bugbear
</td>
</tr>
</table> |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1217566173 | I_kwDOAMm_X85IkpXd | 6528 | cumsum drops index coordinates | dcherian 2448579 | open | 0 | 5 | 2022-04-27T16:04:08Z | 2023-09-22T07:55:56Z | MEMBER | **What happened?** cumsum drops index coordinates. Seen in #6525, #3417 **What did you expect to happen?** Preserve index coordinates **Minimal Complete Verifiable Example**
```python
import xarray as xr

ds = xr.Dataset(
    {"foo": (("x",), [7, 3, 1, 1, 1, 1, 1])},
    coords={"x": [0, 1, 2, 3, 4, 5, 6]},
)
ds.cumsum("x")
```
**Relevant log output** No response **Anything else we need to know?** No response **Environment**
xarray main
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6528/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1859703572 | I_kwDOAMm_X85u2NMU | 8095 | Support `inline_array` kwarg in `open_zarr` | dcherian 2448579 | open | 0 | 2 | 2023-08-21T16:09:38Z | 2023-09-21T20:37:50Z | MEMBER | cc @TomNicholas **What happened?** There is no way to specify **Minimal Complete Verifiable Example**
```python
import xarray as xr

xr.Dataset({"a": xr.DataArray([1.0])}).to_zarr("temp.zarr")
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1902086612 | PR_kwDOAMm_X85aoYuf | 8206 | flox: Set fill_value=np.nan always. | dcherian 2448579 | open | 0 | 0 | 2023-09-19T02:19:49Z | 2023-09-19T02:23:26Z | MEMBER | 1 | pydata/xarray/pulls/8206 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1812301185 | I_kwDOAMm_X85sBYWB | 8005 | Design for IntervalIndex | dcherian 2448579 | open | 0 | 5 | 2023-07-19T16:30:50Z | 2023-09-09T06:30:20Z | MEMBER | **Is your feature request related to a problem?** We should add a wrapper for **The CF design** CF "encoding" for intervals is to use bounds variables. There is an attribute
```python
import numpy as np
import xarray as xr

left = np.arange(0.5, 3.6, 1)
right = np.arange(1.5, 4.6, 1)
bounds = np.stack([left, right])

ds = xr.Dataset(
    {"data": ("x", [1, 2, 3, 4])},
    coords={
        "x": ("x", [1, 2, 3, 4], {"bounds": "x_bounds"}),
        "x_bounds": (("bnds", "x"), bounds),
    },
)
ds
```
A fundamental problem with our current data model is that we lose We would also like to use the "bounds" to enable interval based indexing. **Pandas IntervalIndex** All the indexing is easy to implement by wrapping pandas.IntervalIndex, but there is one limitation. **Fundamental Question** To me, a core question is whether
**Describe the solution you'd like** I've prototyped (2) (approach 1 in this notebook) following @benbovy's suggestion
```python
import numpy as np
import pandas as pd
import xarray as xr
from xarray import Variable
from xarray.indexes import PandasIndex


class XarrayIntervalIndex(PandasIndex):
    def __init__(self, index, dim, coord_dtype):
        assert isinstance(index, pd.IntervalIndex)

        # for PandasIndex
        self.index = index
        self.dim = dim
        self.coord_dtype = coord_dtype

    @classmethod
    def from_variables(cls, variables, options):
        assert len(variables) == 1
        (dim,) = tuple(variables)
        bounds = options["bounds"]
        assert isinstance(bounds, (xr.DataArray, xr.Variable))

        (axis,) = bounds.get_axis_num(set(bounds.dims) - {dim})
        left, right = np.split(bounds.data, 2, axis=axis)
        index = pd.IntervalIndex.from_arrays(left.squeeze(), right.squeeze())
        coord_dtype = bounds.dtype

        return cls(index, dim, coord_dtype)

    def create_variables(self, variables):
        from xarray.core.indexing import PandasIndexingAdapter

        newvars = {self.dim: xr.Variable(self.dim, PandasIndexingAdapter(self.index))}
        return newvars

    def __repr__(self):
        string = f"Xarray{self.index!r}"
        return string

    def to_pandas_index(self):
        return self.index

    @property
    def mid(self):
        return PandasIndex(self.index.mid, self.dim, self.coord_dtype)

    @property
    def left(self):
        return PandasIndex(self.index.left, self.dim, self.coord_dtype)

    @property
    def right(self):
        return PandasIndex(self.index.right, self.dim, self.coord_dtype)
```
**Describe alternatives you've considered** I've tried some approaches in this notebook |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8005/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1888576440 | I_kwDOAMm_X85wkWO4 | 8162 | Update group by multi index | dcherian 2448579 | open | 0 | 0 | 2023-09-09T04:50:29Z | 2023-09-09T04:50:39Z | MEMBER | ideally The goal is to avoid calling There are actually more general issues:
Originally posted by @benbovy in https://github.com/pydata/xarray/issues/8140#issuecomment-1709775666 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1824824446 | I_kwDOAMm_X85sxJx- | 8025 | Support Groupby first, last with flox | dcherian 2448579 | open | 0 | 0 | 2023-07-27T17:07:51Z | 2023-07-27T19:08:06Z | MEMBER | **Is your feature request related to a problem?** flox recently added support for first, last, nanfirst, nanlast. So we should support that on the Xarray GroupBy object. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8025/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
923355397 | MDExOlB1bGxSZXF1ZXN0NjcyMTI5NzY4 | 5480 | Implement weighted groupby | dcherian 2448579 | open | 0 | 1 | 2021-06-17T02:57:17Z | 2023-07-27T18:09:55Z | MEMBER | 1 | pydata/xarray/pulls/5480 |
Initial proof-of-concept. Suggestions to improve this are very welcome. Here's some convenient testing code
```python
ds = xr.tutorial.open_dataset('rasm').load()
month_length = ds.time.dt.days_in_month
weights = month_length.groupby('time.season') / month_length.groupby('time.season').sum()

actual = ds.weighted(month_length).groupby("time.season").mean()
expected = (ds * weights).groupby('time.season').sum(skipna=False)
xr.testing.assert_allclose(actual, expected)
```
I've added info to the repr
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1822982776 | I_kwDOAMm_X85sqIJ4 | 8023 | Possible autoray integration | dcherian 2448579 | open | 0 | 1 | 2023-07-26T18:57:59Z | 2023-07-26T19:26:05Z | MEMBER | I'm opening this issue for discussion really. I stumbled on autoray (Github) by @jcmgray which provides an abstract interface to a number of array types. What struck me was the very general lazy compute system. This opens up the possibility of lazy-but-not-dask computation. Related: https://github.com/pydata/xarray/issues/2298 https://github.com/pydata/xarray/issues/1725 https://github.com/pydata/xarray/issues/5081 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8023/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 } |
xarray 13221727 | issue | ||||||||
1658291950 | I_kwDOAMm_X85i14bu | 7737 | align ignores `copy` | dcherian 2448579 | open | 0 | 2 | 2023-04-07T02:54:00Z | 2023-06-20T23:07:56Z | MEMBER | **Is your feature request related to a problem?** cc @benbovy xref #7730
```python
import numpy as np
import xarray as xr

arr = np.random.randn(10, 10, 365 * 30)
time = xr.date_range("2000", periods=30 * 365, calendar="noleap")

da = xr.DataArray(arr, dims=("y", "x", "time"), coords={"time": time})
year = da["time.year"]
```
**Describe the solution you'd like** I think we need to check **Describe alternatives you've considered** No response **Additional context** No response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7737/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1760733017 | I_kwDOAMm_X85o8qdZ | 7924 | Migrate from nbsphinx to myst, myst-nb | dcherian 2448579 | open | 0 | 4 | 2023-06-16T14:17:41Z | 2023-06-20T22:07:42Z | MEMBER | **Is your feature request related to a problem?** I think we should switch to MyST markdown for our docs. I've been using MyST markdown and MyST-NB in docs in other projects and it works quite well. Advantages: 1. We get HTML reprs in the docs (example), which is a big improvement. (#6620) 2. I think many find markdown a lot easier to write than RST. There's a tool to migrate RST to MyST (RTD's migration guide). **Describe the solution you'd like** No response **Describe alternatives you've considered** No response **Additional context** No response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7924/reactions", "total_count": 5, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
756425955 | MDU6SXNzdWU3NTY0MjU5NTU= | 4648 | Comprehensive benchmarking suite | dcherian 2448579 | open | 0 | 6 | 2020-12-03T18:01:57Z | 2023-06-15T16:56:00Z | MEMBER | I think a good "infrastructure" target for the NASA OSS call would be to expand our benchmarking suite (https://pandas.pydata.org/speed/xarray/#/) AFAIK running these in a useful manner on CI is still unsolved (please correct me if I'm wrong). But we can always run it on an NCAR machine using a cron job. Thoughts? cc @scottyhq A quick survey of work needed (please append):
- [ ] indexing & slicing #3382 #2799 #2227
- [ ] DataArray construction #4744
- [ ] attribute access #4741, #4742
- [ ] property access #3514
- [ ] reindexing? https://github.com/pydata/xarray/issues/1385#issuecomment-297539517
- [x] alignment #3755, #7738
- [ ] assignment #1771
- [ ] coarsen
- [x] groupby #659 #7795 #7796
- [x] resample #4498 #7795
- [ ] weighted #4482 #3883
- [ ] concat #7824
- [ ] merge
- [ ] open_dataset, open_mfdataset #1823
- [ ] stack / unstack
- [ ] apply_ufunc?
- [x] interp #4740 #7843
- [ ] reprs #4744
- [x] to_(dask)_dataframe #7844 #7474

Related: #3514 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4648/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1700678362 | PR_kwDOAMm_X85QBdXY | 7828 | GroupBy: Fix reducing by subset of grouper dims | dcherian 2448579 | open | 0 | 0 | 2023-05-08T18:00:54Z | 2023-05-10T02:41:39Z | MEMBER | 1 | pydata/xarray/pulls/7828 |
Fixes yet another bug with GroupBy reductions. We weren't assigning the group index when reducing by a subset of dimensions present on the grouper This will only pass when flox 0.7.1 reaches conda-forge. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1236174701 | I_kwDOAMm_X85Jrodt | 6610 | Update GroupBy constructor for grouping by multiple variables, dask arrays | dcherian 2448579 | open | 0 | 6 | 2022-05-15T03:17:54Z | 2023-04-26T16:06:17Z | MEMBER | What is your issue?
To enable this in GroupBy we need to update the constructor's signature to
1. Accept multiple "by" variables.
2. Accept "expected group labels" for grouping by dask variables (like The signature in flox is (may be errors!)
You would calculate that last example using flox as
The use of I propose we update groupby's signature to
1. change So then that example becomes
Thoughts? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6610/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
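Grouping by multiple "by" variables ultimately factorizes each grouper to integer codes and combines them into a single code, which is the reduction flox performs; a pandas/numpy sketch of that idea (illustrative, not xarray's or flox's actual code):

```python
import numpy as np
import pandas as pd

# two "by" variables over the same points
labels1 = np.array(["a", "b", "a", "b", "a"])
labels2 = np.array([0, 0, 1, 1, 1])

# factorize each grouper to integer codes, then ravel into one combined code,
# as if grouping by the cartesian product of the two groupers
codes1, uniques1 = pd.factorize(labels1)
codes2, uniques2 = pd.factorize(labels2)
combined = np.ravel_multi_index((codes1, codes2), (len(uniques1), len(uniques2)))

# a reduction over the combined code is then a single-grouper problem
values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sums = np.bincount(combined, weights=values, minlength=len(uniques1) * len(uniques2))
```

For dask-backed groupers, `pd.factorize` is exactly what can't run lazily, which is why the proposal needs "expected group labels" up front.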
1649611456 | I_kwDOAMm_X85iUxLA | 7704 | follow upstream scipy interpolation improvements | dcherian 2448579 | open | 0 | 0 | 2023-03-31T15:46:56Z | 2023-03-31T15:46:56Z | MEMBER | Is your feature request related to a problem?Scipy 1.10.0 has some great improvements to interpolation (release notes) particularly around the fancier methods like It'd be good to see if we can simplify some of our code (or even enable using these options). Describe the solution you'd likeNo response Describe alternatives you've consideredNo response Additional contextNo response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
344614881 | MDU6SXNzdWUzNDQ2MTQ4ODE= | 2313 | Example on using `preprocess` with `mfdataset` | dcherian 2448579 | open | 0 | 6 | 2018-07-25T21:31:34Z | 2023-03-14T12:35:00Z | MEMBER | I wrote this little notebook today while trying to get some satellite data in form that was nice to work with: https://gist.github.com/dcherian/66269bc2b36c2bc427897590d08472d7 I think it would make a useful example for the docs. A few questions: 1. Do you think it'd be a good addition to the examples? 2. Is this the recommended way of adding meaningful co-ordinates, expanding dims etc.? The main bit is this function: ``` def preprocess(ds):
``` Also open to other feedback... |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2313/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
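The pattern in the gist is to clean up each dataset before combining; a sketch of the same idea with in-memory datasets (this `preprocess` is a hypothetical stand-in that promotes a per-file attribute to a concat dimension, not the gist's actual function):

```python
import xarray as xr


def preprocess(ds):
    # hypothetical cleanup: turn a per-file attribute into a new dimension
    # so the files can be concatenated along it
    return ds.expand_dims(pass_=[ds.attrs["pass"]])


datasets = [
    xr.Dataset({"sst": ("x", [1.0, 2.0])}, attrs={"pass": 1}),
    xr.Dataset({"sst": ("x", [3.0, 4.0])}, attrs={"pass": 2}),
]

# open_mfdataset(paths, preprocess=preprocess) applies this per file before combining;
# concat here mimics that combination step on already-open datasets
combined = xr.concat([preprocess(ds) for ds in datasets], dim="pass_")
```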
1599044689 | I_kwDOAMm_X85fT3xR | 7558 | shift time using frequency strings | dcherian 2448579 | open | 0 | 2 | 2023-02-24T17:35:52Z | 2023-02-26T15:08:13Z | MEMBER | Discussed in https://github.com/pydata/xarray/discussions/7557
<sup>Originally posted by **arfriedman** February 24, 2023</sup>
Hi,
In addition to integer offsets, I was wondering if it is possible to [shift](https://docs.xarray.dev/en/stable/generated/xarray.Variable.shift.html) a variable by a specific time frequency interval as in [pandas](https://pandas.pydata.org/docs/reference/api/pandas.Series.shift.html).
For example, something like:
```
import xarray as xr
ds = xr.tutorial.load_dataset("air_temperature")
air = ds["air"]
air.shift(time="1D")
```
```
Otherwise, is there another xarray function or recommended approach for this type of operation? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
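Until `shift` grows a frequency argument, the pandas-style label shift (move the coordinate, leave the data) can be written by reassigning the coordinate; a sketch contrasting it with xarray's integer `shift`, which moves the data instead:

```python
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2000-01-01", periods=4, freq="D")
air = xr.DataArray(np.arange(4.0), coords={"time": time}, dims="time")

# pandas' shift(freq="1D"): labels move, data stays put
label_shifted = air.assign_coords(time=air.time + pd.Timedelta("1D"))

# xarray's integer shift: data moves, labels stay put (NaN fills the gap)
data_shifted = air.shift(time=1)
```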
1599056009 | I_kwDOAMm_X85fT6iJ | 7559 | Support specifying chunk sizes using labels (e.g. frequency string) | dcherian 2448579 | open | 0 | 2 | 2023-02-24T17:44:03Z | 2023-02-25T03:46:49Z | MEMBER | Is your feature request related to a problem?
I think this would be a useful addition to Describe the solution you'd like
Describe alternatives you've consideredHave the user do this manually but that's kind of annoying, and a bit advanced. Additional contextNo response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
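The manual workaround the issue calls "kind of annoying" boils down to counting labels per period with pandas and passing the resulting tuple to `.chunk`; a sketch of just the chunk-size computation:

```python
import pandas as pd

time = pd.date_range("2000-01-01", periods=90, freq="D")

# one chunk per month-start bin, i.e. the hoped-for chunks={"time": "MS"}
chunk_sizes = tuple(time.to_series().resample("MS").size())

# then: ds.chunk({"time": chunk_sizes})
```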
1119647191 | I_kwDOAMm_X85CvHXX | 6220 | [FEATURE]: Use fast path when grouping by unique monotonic decreasing variable | dcherian 2448579 | open | 0 | 1 | 2022-01-31T16:24:29Z | 2023-01-09T16:48:58Z | MEMBER | Is your feature request related to a problem?See https://github.com/pydata/xarray/pull/6213/files#r795716713 We check whether the Describe the solution you'd likeUpdate the condition to Describe alternatives you've consideredNo response Additional contextNo response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6220/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
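The proposed condition is just pandas' monotonicity predicates applied in either direction; a sketch of the check (illustrative, not the actual code in `groupby.py`):

```python
import pandas as pd


def can_use_fast_path(index: pd.Index) -> bool:
    # the fast path applies when group labels are unique and sorted
    # in either direction, so each label is its own group
    return index.is_unique and (
        index.is_monotonic_increasing or index.is_monotonic_decreasing
    )
```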
1194945072 | I_kwDOAMm_X85HOWow | 6447 | allow merging datasets where a variable might be a coordinate variable only in a subset of datasets | dcherian 2448579 | open | 0 | 1 | 2022-04-06T17:53:51Z | 2022-11-16T03:46:56Z | MEMBER | Is your feature request related to a problem?Here are two datasets; `a` is a data variable in one and a coordinate variable in the other, so merging them fails:
```
ds1 = xr.Dataset({"a": ('x', [1, 2, 3])})
ds2 = ds1.set_coords("a")
ds2.update(ds1)
MergeError: unable to determine if these variables should be coordinates or not in the merged result: {'a'}
```
Describe the solution you'd likeI think we should replace this error with a warning and arbitrarily choose to either convert the variable to a coordinate or a data variable in the result. Describe alternatives you've consideredNo response Additional contextNo response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6447/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
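Until then, the workaround is to make both datasets agree on the variable's role before merging; a sketch (either interpretation works, the point is picking one explicitly):

```python
import xarray as xr

ds1 = xr.Dataset({"a": ("x", [1, 2, 3])})
ds2 = ds1.set_coords("a")

# ds2.update(ds1) raises MergeError today; align the interpretation first:
merged = xr.merge([ds2, ds1.set_coords("a")])   # "a" as coordinate everywhere
# or: xr.merge([ds2.reset_coords("a"), ds1])    # "a" as data variable everywhere
```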
802525282 | MDExOlB1bGxSZXF1ZXN0NTY4NjUzOTg0 | 4868 | facets and hue with hist | dcherian 2448579 | open | 0 | 0 | 2021-02-05T22:49:36Z | 2022-10-19T07:27:32Z | MEMBER | 0 | pydata/xarray/pulls/4868 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
802431534 | MDExOlB1bGxSZXF1ZXN0NTY4NTc1NzIw | 4866 | Refactor line plotting | dcherian 2448579 | open | 0 | 0 | 2021-02-05T19:51:24Z | 2022-10-18T20:13:14Z | MEMBER | 0 | pydata/xarray/pulls/4866 | Refactors line plotting to use a Next i'll use this decorator on see #4288 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1378174355 | I_kwDOAMm_X85SJUWT | 7055 | Use roundtrip context manager in distributed write tests | dcherian 2448579 | open | 0 | 0 | 2022-09-19T15:53:40Z | 2022-09-19T15:53:40Z | MEMBER | What is your issue?File roundtripping tests in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1321228754 | I_kwDOAMm_X85OwFnS | 6845 | Do we need to update AbstractArray for duck arrays? | dcherian 2448579 | open | 0 | 6 | 2022-07-28T16:59:59Z | 2022-07-29T17:20:39Z | MEMBER | What happened?I'm calling Traceback below:
```
--> 25 a = _core.array(a, copy=False)
26 return a.round(decimals, out=out)
27
cupy/_core/core.pyx in cupy._core.core.array()
cupy/_core/core.pyx in cupy._core.core.array()
cupy/_core/core.pyx in cupy._core.core._array_default()
~/miniconda3/envs/gpu/lib/python3.7/site-packages/xarray/core/common.py in __array__(self, dtype)
146
147 def __array__(self: Any, dtype: DTypeLike = None) -> np.ndarray:
--> 148 return np.asarray(self.values, dtype=dtype)
149
150 def __repr__(self) -> str:
~/miniconda3/envs/gpu/lib/python3.7/site-packages/xarray/core/dataarray.py in values(self)
644 type does not support coercion like this (e.g. cupy).
645 """
--> 646 return self.variable.values
647
648 @values.setter
~/miniconda3/envs/gpu/lib/python3.7/site-packages/xarray/core/variable.py in values(self)
517 def values(self):
518 """The variable's data as a numpy.ndarray"""
--> 519 return _as_array_or_item(self._data)
520
521 @values.setter
~/miniconda3/envs/gpu/lib/python3.7/site-packages/xarray/core/variable.py in _as_array_or_item(data)
257 TODO: remove this (replace with np.asarray) once these issues are fixed
258 """
--> 259 data = np.asarray(data)
260 if data.ndim == 0:
261 if data.dtype.kind == "M":
cupy/_core/core.pyx in cupy._core.core.ndarray.__array__()
TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly.
```
What did you expect to happen?Not an error? I'm not sure what's expected
My question is: Do we need to update Minimal Complete Verifiable ExampleNo response MVCE confirmation
Relevant log outputNo response Anything else we need to know?No response Environment
xarray v2022.6.0
cupy 10.6.0
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
540451721 | MDExOlB1bGxSZXF1ZXN0MzU1MjU4NjMy | 3646 | [WIP] GroupBy plotting | dcherian 2448579 | open | 0 | 7 | 2019-12-19T17:26:39Z | 2022-06-09T14:50:17Z | MEMBER | 1 | pydata/xarray/pulls/3646 |
This adds plotting methods to GroupBy objects so that it's easy to plot each group as a facet. I'm finding this super helpful in my current research project. It's pretty self-contained, mostly just adding This still needs more tests but I would like feedback on the feature and the implementation. Example
``` python
import numpy as np
import xarray as xr

time = np.arange(80)
da = xr.DataArray(5 * np.sin(2 * np.pi * time / 10), coords={"time": time}, dims="time")
da["period"] = da.time.where((time % 10) == 0).ffill("time")/10
da.plot()
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3646/reactions", "total_count": 3, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
663931851 | MDU6SXNzdWU2NjM5MzE4NTE= | 4251 | expanded attrs makes HTML repr confusing to read | dcherian 2448579 | open | 0 | 2 | 2020-07-22T17:33:13Z | 2022-04-18T03:23:16Z | MEMBER | When the See
Perhaps the gray background could be applied to attrs associated with a variable too? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4251/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1203414243 | I_kwDOAMm_X85HuqTj | 6481 | refactor broadcast for flexible indexes | dcherian 2448579 | open | 0 | 0 | 2022-04-13T14:51:19Z | 2022-04-13T14:51:28Z | MEMBER | What is your issue?From @benbovy in https://github.com/pydata/xarray/pull/6477
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1194790343 | I_kwDOAMm_X85HNw3H | 6445 | map removes non-dimensional coordinate variables | dcherian 2448579 | open | 0 | 0 | 2022-04-06T15:40:40Z | 2022-04-06T15:40:40Z | MEMBER | What happened?
Variables What did you expect to happen?No response Minimal Complete Verifiable ExampleNo response Relevant log outputNo response Anything else we need to know?No response Environmentxarray 2022.03.0 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1171916710 | I_kwDOAMm_X85F2gem | 6372 | apply_ufunc + dask="parallelized" + no core dimensions should raise a nicer error about core dimensions being absent | dcherian 2448579 | open | 0 | 0 | 2022-03-17T04:25:37Z | 2022-03-17T05:10:16Z | MEMBER | What happened?From https://github.com/pydata/xarray/discussions/6370 Calling
What did you expect to happen?With numpy data the apply_ufunc call does raise an error:
Minimal Complete Verifiable Example``` python
import numpy as np
import xarray as xr

dt = xr.Dataset(
    data_vars=dict(
        value=(["x"], [1, 1, 2, 2, 2, 3, 3, 3, 3, 3]),
    ),
    coords=dict(
        lon=(["x"], np.linspace(0, 1, 10)),
    ),
).chunk(chunks={"x": (2, 3, 5)})  # three chunks of different size

xr.apply_ufunc(lambda x: np.mean(x), dt, dask="parallelized")
``` Relevant log outputNo response Anything else we need to know?No response EnvironmentN/A |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6372/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
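The underlying problem is that the reduction removes a dimension that was never declared as a core dimension; declaring it via `input_core_dims` makes the call valid on both paths. A minimal sketch without dask (with dask-backed data you would additionally pass `dask="parallelized"` and output dtypes):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"value": ("x", [1.0, 1.0, 2.0, 2.0, 2.0, 3.0, 3.0, 3.0, 3.0, 3.0])})

# declare "x" as a core dimension; apply_ufunc moves core dims to the end,
# so the reduction over axis=-1 consumes exactly the declared dimension
out = xr.apply_ufunc(lambda x: np.mean(x, axis=-1), ds, input_core_dims=[["x"]])
```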
584461380 | MDU6SXNzdWU1ODQ0NjEzODA= | 3868 | What should pad do about IndexVariables? | dcherian 2448579 | open | 0 | 6 | 2020-03-19T14:40:21Z | 2022-02-22T16:02:21Z | MEMBER | Currently We need to think about 1. Int, Float, Datetime64, CFTime indexes: linearly extrapolate? Should we care whether the index is sorted or not? (I think not) 2. MultiIndexes: ?? 3. CategoricalIndexes: ?? 4. Unindexed dimensions EDIT: Added unindexed dimensions |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3868/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
937266282 | MDU6SXNzdWU5MzcyNjYyODI= | 5578 | Specify minimum versions in setup.cfg | dcherian 2448579 | open | 0 | 2 | 2021-07-05T17:25:03Z | 2022-01-09T03:33:38Z | MEMBER | { "url": "https://api.github.com/repos/pydata/xarray/issues/5578/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | |||||||||
514716299 | MDU6SXNzdWU1MTQ3MTYyOTk= | 3468 | failure when roundtripping empty dataset to pandas | dcherian 2448579 | open | 0 | 1 | 2019-10-30T14:28:31Z | 2021-11-13T14:54:09Z | MEMBER | { "url": "https://api.github.com/repos/pydata/xarray/issues/3468/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | |||||||||
1048856436 | I_kwDOAMm_X84-hEd0 | 5962 | Test resampling with dask arrays | dcherian 2448579 | open | 0 | 0 | 2021-11-09T17:02:45Z | 2021-11-09T17:02:45Z | MEMBER | I noticed that we don't test resampling with dask arrays (well just one). This could be a good opportunity to convert |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1043846371 | I_kwDOAMm_X84-N9Tj | 5934 | add test for custom backend entrypoint | dcherian 2448579 | open | 0 | 0 | 2021-11-03T16:57:14Z | 2021-11-03T16:57:21Z | MEMBER | From https://github.com/pydata/xarray/pull/5931 It would be good to add a test checking that custom backend entrypoints work. This might involve creating a dummy package that registers an entrypoint (https://github.com/pydata/xarray/pull/5931#issuecomment-959131968) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
965072308 | MDU6SXNzdWU5NjUwNzIzMDg= | 5687 | Make cftime dateoffsets public | dcherian 2448579 | open | 0 | 2 | 2021-08-10T14:57:39Z | 2021-08-10T23:28:20Z | MEMBER | Consider the following cftime vector. It's fairly common to see users asking how to subtract "1 month" from this kind of vector:
Subtracting I think pandas exposes this functionality as |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5687/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
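For standard-calendar timestamps this already works through pandas' `DateOffset`, which handles the "subtract one month from a month-end" case by clipping to the shorter month; the issue asks for a public cftime equivalent of the same behavior. The pandas version:

```python
import pandas as pd

t = pd.Timestamp("2000-03-31")
# calendar-aware month arithmetic: 2000-02-31 doesn't exist,
# so the result clips to the last valid day of February
one_month_back = t - pd.DateOffset(months=1)
```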
938141608 | MDU6SXNzdWU5MzgxNDE2MDg= | 5582 | Faster unstacking of dask arrays | dcherian 2448579 | open | 0 | 0 | 2021-07-06T18:12:05Z | 2021-07-06T18:54:40Z | MEMBER | Recent dask version support assigning to a list of ints along one dimension. we can use this for unstacking (diff builds on #5577) ```diff diff --git i/xarray/core/variable.py w/xarray/core/variable.py index 222e8dab9..a50dfc574 100644 --- i/xarray/core/variable.py +++ w/xarray/core/variable.py @@ -1593,11 +1593,9 @@ class Variable(AbstractArray, NdimSizeLenMixin, VariableArithmetic): else: dtype = self.dtype
This should be what The annoying bit is figuring out when to use this version and what to do with things like dask wrapping sparse. I think we want to loop over each variable in cc @Illviljan if you're interested in implementing this |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
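The trick in the diff is that unstacking is scatter-assignment of the stacked values into a dense array at positions given by the MultiIndex codes; a numpy sketch of that pattern (the one-dimensional fancy assignment that recent dask versions also support):

```python
import numpy as np

# stacked data and the flat positions each element occupies after unstacking
values = np.array([10.0, 20.0, 30.0])
flat_codes = np.array([0, 2, 3])   # derived from the MultiIndex codes
full_size = 4                      # product of the unstacked dimension sizes

out = np.full(full_size, np.nan)   # fill value for missing combinations
out[flat_codes] = values           # the assignment the diff performs
# out.reshape(unstacked_shape) would follow
```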
520079199 | MDU6SXNzdWU1MjAwNzkxOTk= | 3497 | how should xarray handle pandas attrs | dcherian 2448579 | open | 0 | 1 | 2019-11-08T15:32:36Z | 2021-07-04T03:31:02Z | MEMBER | Continuing discussion form #3491. Pandas has added @dcherian:
@max-sixty:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3497/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
798586325 | MDU6SXNzdWU3OTg1ODYzMjU= | 4852 | mention HDF files in docs | dcherian 2448579 | open | 0 | 0 | 2021-02-01T18:05:23Z | 2021-07-04T01:24:22Z | MEMBER | This is such a common question that we should address it in the docs. Just saying that some hdf5 files can be opened with |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
797053785 | MDU6SXNzdWU3OTcwNTM3ODU= | 4848 | simplify API reference presentation | dcherian 2448579 | open | 0 | 0 | 2021-01-29T17:23:41Z | 2021-01-29T17:23:46Z | MEMBER | Can we remove |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4848/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
787486472 | MDU6SXNzdWU3ODc0ODY0NzI= | 4817 | Add encoding to HTML repr | dcherian 2448579 | open | 0 | 0 | 2021-01-16T15:14:50Z | 2021-01-24T17:31:31Z | MEMBER | Is your feature request related to a problem? Please describe.
Describe the solution you'd like I think it'd be nice to add it to the HTML repr, collapsed by default. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4817/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
648250671 | MDU6SXNzdWU2NDgyNTA2NzE= | 4189 | List supported options for `backend_kwargs` in `open_dataset` | dcherian 2448579 | open | 0 | 0 | 2020-06-30T15:01:31Z | 2020-12-15T04:28:04Z | MEMBER | We should list supported options for xref #4187 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4189/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
685825824 | MDU6SXNzdWU2ODU4MjU4MjQ= | 4376 | wrong chunk sizes in html repr with nonuniform chunks | dcherian 2448579 | open | 0 | 3 | 2020-08-25T21:23:11Z | 2020-10-07T11:11:23Z | MEMBER | What happened: The HTML repr is using the first element in a chunks tuple; What you expected to happen: it should be using whatever dask does in this case Minimal Complete Verifiable Example:
```python
import xarray as xr
import dask

test = xr.DataArray(
    dask.array.zeros(
        (12, 901, 1001),
        chunks=(
            (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
            (1, 899, 1),
            (1, 199, 1, 199, 1, 199, 1, 199, 1, 199, 1),
        ),
    )
)
test.to_dataset(name="a")
```
EDIT: The text repr has the same issue
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
538521723 | MDU6SXNzdWU1Mzg1MjE3MjM= | 3630 | reviewnb for example notebooks? | dcherian 2448579 | open | 0 | 0 | 2019-12-16T16:34:28Z | 2019-12-16T16:34:28Z | MEMBER | What do people think of adding ReviewNB https://www.reviewnb.com/ to facilitate easy reviewing of example notebooks? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3630/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
435787982 | MDU6SXNzdWU0MzU3ODc5ODI= | 2913 | Document xarray data model | dcherian 2448579 | open | 0 | 0 | 2019-04-22T16:23:41Z | 2019-04-22T16:23:41Z | MEMBER | It would be nice to have a separate page that detailed this for users unfamiliar with netCDF. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2913/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue |