issues
28 rows where comments = 0, state = "open" and user = 2448579 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2278510478 | PR_kwDOAMm_X85uhIGP | 8998 | Zarr: Optimize appending | dcherian 2448579 | open | 0 | 0 | 2024-05-03T22:21:44Z | 2024-05-03T22:23:34Z | MEMBER | 1 | pydata/xarray/pulls/8998 | Builds on #8997 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
2187743087 | PR_kwDOAMm_X85ptH1f | 8840 | Grouper, Resampler as public api | dcherian 2448579 | open | 0 | 0 | 2024-03-15T05:16:05Z | 2024-04-21T16:21:34Z | MEMBER | 1 | pydata/xarray/pulls/8840 | Expose Grouper and Resampler as public API. TODO:
- [ ] Consider avoiding IndexVariable |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
2228319306 | I_kwDOAMm_X86E0XRK | 8914 | swap_dims does not propagate indexes properly | dcherian 2448579 | open | 0 | 0 | 2024-04-05T15:36:26Z | 2024-04-05T15:36:27Z | MEMBER | What happened? Found by hypothesis:

```python
import xarray as xr
import numpy as np

var = xr.Variable(
    dims="2",
    data=np.array(
        ['1970-01-01T00:00:00.000000000',
         '1970-01-01T00:00:00.000000002',
         '1970-01-01T00:00:00.000000001'],
        dtype='datetime64[ns]',
    ),
)
var1 = xr.Variable(data=np.array([0], dtype=np.uint32), dims=['1'], attrs={})

state = xr.Dataset()
state['2'] = var
state = state.stack({"0": ["2"]})
state['1'] = var1
state['1_'] = var1  # .copy(deep=True)
state = state.swap_dims({"1": "1_"})
xr.testing.assertions._assert_internal_invariants(state, False)
```

This swaps simple pandas indexed dims, but the multi-index that is in the dataset and not affected by the swap_dims op ends up broken. cc @benbovy What did you expect to happen? No response. Minimal Complete Verifiable Example: No response. MVCE confirmation. Relevant log output: No response. Anything else we need to know? No response. Environment |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
2021856935 | PR_kwDOAMm_X85g81gb | 8509 | Proof of concept - public Grouper objects | dcherian 2448579 | open | 0 | 0 | 2023-12-02T04:52:27Z | 2024-03-15T05:18:18Z | MEMBER | 1 | pydata/xarray/pulls/8509 | Not for merging, just proof that it can be done nicely :) Now builds on #8840. ~Builds on an older version of #8507~ Try it out!

```python
import xarray as xr
from xarray.core.groupers import SeasonGrouper, SeasonResampler

ds = xr.tutorial.open_dataset("air_temperature")

# custom seasons!
ds.air.groupby(time=SeasonGrouper(["JF", "MAM", "JJAS", "OND"])).mean()
ds.air.resample(time=SeasonResampler(["DJF", "MAM", "JJAS", "ON"])).count()
``` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8509/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
2052952379 | I_kwDOAMm_X856XZE7 | 8568 | Raise when assigning attrs to virtual variables (default coordinate arrays) | dcherian 2448579 | open | 0 | 0 | 2023-12-21T19:24:11Z | 2023-12-21T19:24:19Z | MEMBER | Discussed in https://github.com/pydata/xarray/discussions/8567
Originally posted by **matthew-brett**, December 21, 2023
Sorry for the introductory question, but we (@ivanov and I) ran into this behavior while experimenting:
```python
import numpy as np
import xarray as xr
data = np.zeros((3, 4, 5))
ds = xr.DataArray(data, dims=('i', 'j', 'k'))
print(ds['k'].attrs)
```
This shows `{}` as we might reasonably expect. But then:
```python
ds['k'].attrs['foo'] = 'bar'
print(ds['k'].attrs)
```
This also gives `{}`, which we found surprising. We worked out why that was, after a little experimentation (the default coordinate arrays seem to get created on the fly and garbage collected immediately). But it took us a little while. Is that intended? Is there a way of making this less confusing?
Thanks for any help. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8568/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
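A minimal sketch (not part of the linked discussion) of the usual workaround for the issue above: assign an explicit coordinate so there is a stored variable to hold the attrs. The coordinate values are illustrative, and this assumes current xarray behaviour where an explicitly assigned coordinate shares its underlying Variable, so in-place attrs assignment persists.

```python
import numpy as np
import xarray as xr

data = np.zeros((3, 4, 5))
da = xr.DataArray(data, dims=("i", "j", "k"))

# Assign an explicit coordinate for "k" so the attrs have somewhere to live.
da = da.assign_coords(k=np.arange(5))
da["k"].attrs["foo"] = "bar"
print(da["k"].attrs)  # now shows {'foo': 'bar'} instead of silently dropping it
```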
1954809370 | I_kwDOAMm_X850hAYa | 8353 | Update benchmark suite for asv 0.6.1 | dcherian 2448579 | open | 0 | 0 | 2023-10-20T18:13:22Z | 2023-12-19T05:53:21Z | MEMBER | The new asv version comes with decorators for parameterizing and skipping, and the ability to use https://github.com/airspeed-velocity/asv/releases https://asv.readthedocs.io/en/v0.6.1/writing_benchmarks.html#skipping-benchmarks This might help us reduce benchmark times a bit, or at least simplify the code some. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
1975400777 | PR_kwDOAMm_X85efqSl | 8408 | Generalize explicit_indexing_adapter | dcherian 2448579 | open | 0 | 0 | 2023-11-03T03:29:40Z | 2023-11-03T03:53:25Z | MEMBER | 1 | pydata/xarray/pulls/8408 | Use |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
1942893480 | I_kwDOAMm_X85zzjOo | 8306 | keep_attrs for NamedArray | dcherian 2448579 | open | 0 | 0 | 2023-10-14T02:29:54Z | 2023-10-14T02:31:35Z | MEMBER | What is your issue? Copying over @max-sixty's comment from https://github.com/pydata/xarray/pull/8304#discussion_r1358873522
@pydata/xarray Should we just delete the |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8306/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
1902086612 | PR_kwDOAMm_X85aoYuf | 8206 | flox: Set fill_value=np.nan always. | dcherian 2448579 | open | 0 | 0 | 2023-09-19T02:19:49Z | 2023-09-19T02:23:26Z | MEMBER | 1 | pydata/xarray/pulls/8206 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
1888576440 | I_kwDOAMm_X85wkWO4 | 8162 | Update group by multi index | dcherian 2448579 | open | 0 | 0 | 2023-09-09T04:50:29Z | 2023-09-09T04:50:39Z | MEMBER | ideally The goal is to avoid calling There are actually more general issues:
Originally posted by @benbovy in https://github.com/pydata/xarray/issues/8140#issuecomment-1709775666 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
1824824446 | I_kwDOAMm_X85sxJx- | 8025 | Support Groupby first, last with flox | dcherian 2448579 | open | 0 | 0 | 2023-07-27T17:07:51Z | 2023-07-27T19:08:06Z | MEMBER | Is your feature request related to a problem? flox recently added support for first, last, nanfirst, nanlast. So we should support that on the Xarray GroupBy object. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8025/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
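For context on the issue above, a short sketch of the user-facing calls this targets; `.first()` and `.last()` already exist on grouped objects, and the proposal is to route them through flox for dask-backed data. The toy data below is illustrative.

```python
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2000-01-01", periods=60, freq="D")
da = xr.DataArray(np.arange(60.0), coords={"time": time}, dims="time")

# These reductions are what the issue proposes to accelerate with flox
# when the underlying data are dask arrays.
print(da.groupby("time.month").first().values)
print(da.groupby("time.month").last().values)
```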
1700678362 | PR_kwDOAMm_X85QBdXY | 7828 | GroupBy: Fix reducing by subset of grouper dims | dcherian 2448579 | open | 0 | 0 | 2023-05-08T18:00:54Z | 2023-05-10T02:41:39Z | MEMBER | 1 | pydata/xarray/pulls/7828 |
Fixes yet another bug with GroupBy reductions. We weren't assigning the group index when reducing by a subset of dimensions present on the grouper. This will only pass when flox 0.7.1 reaches conda-forge. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
1649611456 | I_kwDOAMm_X85iUxLA | 7704 | follow upstream scipy interpolation improvements | dcherian 2448579 | open | 0 | 0 | 2023-03-31T15:46:56Z | 2023-03-31T15:46:56Z | MEMBER | Is your feature request related to a problem? Scipy 1.10.0 has some great improvements to interpolation (release notes), particularly around the fancier methods like It'd be good to see if we can simplify some of our code (or even enable using these options). Describe the solution you'd like: No response. Describe alternatives you've considered: No response. Additional context: No response. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
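As an illustration of the SciPy feature the issue above refers to (a sketch, not xarray code): newer SciPy releases let `RegularGridInterpolator` use higher-order tensor-product methods such as "cubic", which is the kind of option xarray's interpolation machinery could pass through.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# A small regular grid with values on it.
x = np.linspace(0, 1, 11)
y = np.linspace(0, 1, 21)
values = np.sin(x)[:, None] * np.cos(y)[None, :]

# "cubic" is one of the newer tensor-product spline methods.
interp = RegularGridInterpolator((x, y), values, method="cubic")
print(interp(np.array([[0.25, 0.4], [0.5, 0.9]])))
```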
802525282 | MDExOlB1bGxSZXF1ZXN0NTY4NjUzOTg0 | 4868 | facets and hue with hist | dcherian 2448579 | open | 0 | 0 | 2021-02-05T22:49:36Z | 2022-10-19T07:27:32Z | MEMBER | 0 | pydata/xarray/pulls/4868 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
802431534 | MDExOlB1bGxSZXF1ZXN0NTY4NTc1NzIw | 4866 | Refactor line plotting | dcherian 2448579 | open | 0 | 0 | 2021-02-05T19:51:24Z | 2022-10-18T20:13:14Z | MEMBER | 0 | pydata/xarray/pulls/4866 | Refactors line plotting to use a Next I'll use this decorator on see #4288 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull
1378174355 | I_kwDOAMm_X85SJUWT | 7055 | Use roundtrip context manager in distributed write tests | dcherian 2448579 | open | 0 | 0 | 2022-09-19T15:53:40Z | 2022-09-19T15:53:40Z | MEMBER | What is your issue? File roundtripping tests in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
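The issue text above is cut off after "File roundtripping tests in"; as a rough sketch of the pattern being referenced (this is not xarray's actual test helper), a roundtrip context manager writes a dataset to a temporary file, reopens it, and cleans up afterwards:

```python
import contextlib
import os
import tempfile

import numpy as np
import xarray as xr


@contextlib.contextmanager
def roundtrip(ds, **save_kwargs):
    # Write to a temporary netCDF file, yield the reopened dataset, then clean up.
    with tempfile.TemporaryDirectory() as tmpdir:
        path = os.path.join(tmpdir, "test.nc")
        ds.to_netcdf(path, **save_kwargs)
        with xr.open_dataset(path) as actual:
            yield actual


ds = xr.Dataset({"a": ("x", np.arange(3.0))})
with roundtrip(ds) as actual:
    xr.testing.assert_identical(ds, actual)
```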
1203414243 | I_kwDOAMm_X85HuqTj | 6481 | refactor broadcast for flexible indexes | dcherian 2448579 | open | 0 | 0 | 2022-04-13T14:51:19Z | 2022-04-13T14:51:28Z | MEMBER | What is your issue? From @benbovy in https://github.com/pydata/xarray/pull/6477
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
1194790343 | I_kwDOAMm_X85HNw3H | 6445 | map removes non-dimensional coordinate variables | dcherian 2448579 | open | 0 | 0 | 2022-04-06T15:40:40Z | 2022-04-06T15:40:40Z | MEMBER | What happened?
Variables What did you expect to happen? No response. Minimal Complete Verifiable Example: No response. Relevant log output: No response. Anything else we need to know? No response. Environment: xarray 2022.03.0 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
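Most of the issue body above is missing; a small sketch reconstructing the reported behaviour (the variable and coordinate names are hypothetical, and the dropped-coordinate outcome is taken from the issue title rather than verified here):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"a": ("x", np.arange(3.0))},
    coords={"x": [10, 20, 30], "label": ("x", ["p", "q", "r"])},
)

mapped = ds.map(np.abs)
# Per the issue title, the non-dimensional coordinate "label" goes missing
# from mapped.coords after Dataset.map.
print(list(ds.coords), list(mapped.coords))
```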
1171916710 | I_kwDOAMm_X85F2gem | 6372 | apply_ufunc + dask="parallelized" + no core dimensions should raise a nicer error about core dimensions being absent | dcherian 2448579 | open | 0 | 0 | 2022-03-17T04:25:37Z | 2022-03-17T05:10:16Z | MEMBER | What happened? From https://github.com/pydata/xarray/discussions/6370. Calling
What did you expect to happen? With numpy data the apply_ufunc call does raise an error:
Minimal Complete Verifiable Example:

```python
import numpy as np
import xarray as xr

dt = xr.Dataset(
    data_vars=dict(
        value=(["x"], [1, 1, 2, 2, 2, 3, 3, 3, 3, 3]),
    ),
    coords=dict(
        lon=(["x"], np.linspace(0, 1, 10)),
    ),
).chunk(chunks={"x": (2, 3, 5)})  # three chunks of different size

xr.apply_ufunc(lambda x: np.mean(x), dt, dask="parallelized")
```

Relevant log output: No response. Anything else we need to know? No response. Environment: N/A |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6372/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
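Not part of the issue above: a sketch of the call with core dimensions spelled out, which is what the requested error message would point users toward. The rechunking to a single chunk along "x" is needed because core dimensions may not be split across chunks.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(10.0), dims="x").chunk({"x": -1})

# Declaring "x" as a core dimension tells dask="parallelized" how to apply
# the reduction, instead of failing with a confusing broadcast error.
result = xr.apply_ufunc(
    lambda x: np.mean(x, axis=-1),
    da,
    input_core_dims=[["x"]],
    dask="parallelized",
    output_dtypes=[float],
)
print(result.compute())
```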
1048856436 | I_kwDOAMm_X84-hEd0 | 5962 | Test resampling with dask arrays | dcherian 2448579 | open | 0 | 0 | 2021-11-09T17:02:45Z | 2021-11-09T17:02:45Z | MEMBER | I noticed that we don't test resampling with dask arrays (well just one). This could be a good opportunity to convert |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
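A rough sketch (a hypothetical test, not taken from the xarray test suite) of the kind of dask-backed resampling check the issue above asks for, comparing against the in-memory result:

```python
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2000-01-01", periods=365, freq="D")
da = xr.DataArray(np.random.rand(365), coords={"time": time}, dims="time")

expected = da.resample(time="MS").mean()                    # numpy-backed
actual = da.chunk({"time": 30}).resample(time="MS").mean()  # dask-backed

xr.testing.assert_allclose(expected, actual.compute())
```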
1043846371 | I_kwDOAMm_X84-N9Tj | 5934 | add test for custom backend entrypoint | dcherian 2448579 | open | 0 | 0 | 2021-11-03T16:57:14Z | 2021-11-03T16:57:21Z | MEMBER | From https://github.com/pydata/xarray/pull/5931 It would be good to add a test checking that custom backend entrypoints work. This might involve creating a dummy package that registers an entrypoint (https://github.com/pydata/xarray/pull/5931#issuecomment-959131968) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
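A sketch of the kind of dummy backend such a test would exercise, following xarray's documented `BackendEntrypoint` interface; the package, module, and engine names are hypothetical.

```python
# Contents of a hypothetical dummy package, e.g. my_package/my_backend.py
import numpy as np
import xarray as xr
from xarray.backends import BackendEntrypoint


class DummyBackendEntrypoint(BackendEntrypoint):
    """Ignores the input and returns a fixed dataset."""

    def open_dataset(self, filename_or_obj, *, drop_variables=None):
        return xr.Dataset({"a": ("x", np.arange(3))})

    def guess_can_open(self, filename_or_obj):
        return str(filename_or_obj).endswith(".dummy")


# The dummy package would register the entrypoint in its pyproject.toml:
#
#   [project.entry-points."xarray.backends"]
#   dummy = "my_package.my_backend:DummyBackendEntrypoint"
#
# after which the test could simply call:
#   xr.open_dataset("anything.dummy", engine="dummy")
```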
938141608 | MDU6SXNzdWU5MzgxNDE2MDg= | 5582 | Faster unstacking of dask arrays | dcherian 2448579 | open | 0 | 0 | 2021-07-06T18:12:05Z | 2021-07-06T18:54:40Z | MEMBER | Recent dask versions support assigning to a list of ints along one dimension. We can use this for unstacking (diff builds on #5577):

```diff
diff --git i/xarray/core/variable.py w/xarray/core/variable.py
index 222e8dab9..a50dfc574 100644
--- i/xarray/core/variable.py
+++ w/xarray/core/variable.py
@@ -1593,11 +1593,9 @@ class Variable(AbstractArray, NdimSizeLenMixin, VariableArithmetic):
         else:
             dtype = self.dtype
```

This should be what The annoying bit is figuring out when to use this version and what to do with things like dask wrapping sparse. I think we want to loop over each variable in cc @Illviljan if you're interested in implementing this |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
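The diff above is cut off; as a standalone illustration of the dask feature it relies on (assignment with a list of integers along one dimension, which the issue says recent dask supports):

```python
import dask.array as da
import numpy as np

x = da.zeros((6, 4), chunks=(3, 4))

# Assign along the first dimension using a list of integer positions.
x[[0, 2, 5]] = np.ones((3, 4))
print(x.compute())
```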
798586325 | MDU6SXNzdWU3OTg1ODYzMjU= | 4852 | mention HDF files in docs | dcherian 2448579 | open | 0 | 0 | 2021-02-01T18:05:23Z | 2021-07-04T01:24:22Z | MEMBER | This is such a common question that we should address it in the docs. Just saying that some hdf5 files can be opened with |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
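The sentence above is cut off; assuming it refers to the h5netcdf engine, a short illustration (with a hypothetical filename) of how some HDF5 files can already be opened:

```python
import xarray as xr

# HDF5 files that follow netCDF4 conventions can often be opened directly.
ds = xr.open_dataset("data.h5", engine="h5netcdf")

# Arbitrary HDF5 files may instead need h5py and manual construction.
```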
797053785 | MDU6SXNzdWU3OTcwNTM3ODU= | 4848 | simplify API reference presentation | dcherian 2448579 | open | 0 | 0 | 2021-01-29T17:23:41Z | 2021-01-29T17:23:46Z | MEMBER | Can we remove |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4848/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
787486472 | MDU6SXNzdWU3ODc0ODY0NzI= | 4817 | Add encoding to HTML repr | dcherian 2448579 | open | 0 | 0 | 2021-01-16T15:14:50Z | 2021-01-24T17:31:31Z | MEMBER | Is your feature request related to a problem? Please describe.
Describe the solution you'd like: I think it'd be nice to add it to the HTML repr, collapsed by default. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4817/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
648250671 | MDU6SXNzdWU2NDgyNTA2NzE= | 4189 | List supported options for `backend_kwargs` in `open_dataset` | dcherian 2448579 | open | 0 | 0 | 2020-06-30T15:01:31Z | 2020-12-15T04:28:04Z | MEMBER | We should list supported options for `backend_kwargs` in `open_dataset`. xref #4187 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4189/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
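As an illustration of the kind of option such a list would document (the filename and group name are hypothetical; `group` is an option understood by the netcdf4 backend):

```python
import xarray as xr

# Backend-specific options are passed via backend_kwargs and depend on the
# engine in use; they are currently documented per backend, if at all.
ds = xr.open_dataset(
    "simulation.nc",
    engine="netcdf4",
    backend_kwargs={"group": "model_output"},
)
```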
538521723 | MDU6SXNzdWU1Mzg1MjE3MjM= | 3630 | reviewnb for example notebooks? | dcherian 2448579 | open | 0 | 0 | 2019-12-16T16:34:28Z | 2019-12-16T16:34:28Z | MEMBER | What do people think of adding ReviewNB https://www.reviewnb.com/ to facilitate easy reviewing of example notebooks? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3630/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
435787982 | MDU6SXNzdWU0MzU3ODc5ODI= | 2913 | Document xarray data model | dcherian 2448579 | open | 0 | 0 | 2019-04-22T16:23:41Z | 2019-04-22T16:23:41Z | MEMBER | It would be nice to have a separate page that detailed this for users unfamiliar with netCDF. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2913/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);