issues
11 rows where repo = 13221727, type = "pull" and user = 39069044 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2220689594 | PR_kwDOAMm_X85rcmw1 | 8904 | Handle extra indexes for zarr region writes | slevang 39069044 | open | 0 | 8 | 2024-04-02T14:34:00Z | 2024-04-03T19:20:37Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/8904 |
Small follow-up to #8877. If we're going to drop the indices anyway for region writes, we may as well not raise if they are still in the dataset. This makes the user experience of region writes simpler:

```python
ds = xr.tutorial.open_dataset("air_temperature")
ds.to_zarr("test.zarr")

region = {"time": slice(0, 10)}

# This fails unless we remember to ds.drop_vars(["lat", "lon"])
ds.isel(**region).to_zarr("test.zarr", region=region)
```

I find this annoying because I often have a dataset with a bunch of unrelated indexes and have to remember which ones to drop, or use some verbose workaround.

cc @dcherian |
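A hedged sketch of the before/after behavior this describes, reusing `ds` and `region` from the snippet above (the internals of the change are not shown in this row):

```python
# Workaround previously needed: manually drop index coordinates that are not
# part of the region before writing.
ds.isel(**region).drop_vars(["lat", "lon"]).to_zarr("test.zarr", region=region)

# With this change, the same write should succeed without the manual drop,
# since unrelated indexes are dropped internally for region writes anyway.
ds.isel(**region).to_zarr("test.zarr", region=region)
```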
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8904/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2171912634 | PR_kwDOAMm_X85o3Ify | 8809 | Pass variable name to `encode_zarr_variable` | slevang 39069044 | closed | 0 | 6 | 2024-03-06T16:21:53Z | 2024-04-03T14:26:49Z | 2024-04-03T14:26:48Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/8809 |
The change from https://github.com/pydata/xarray/pull/8672 mostly fixed the issue of serializing a reset multiindex in the backends, but there was an additional niche issue that turned up in xeofs that was causing serialization to still fail on the zarr backend. The issue is that zarr is the only backend that uses its own custom variable encoding routine (`encode_zarr_variable`). As a minimal fix, this PR just passes the variable name through to `encode_zarr_variable`.

The exact workflow this turned up in involves DataTree and looks like this:

```python
import numpy as np
import xarray as xr
from datatree import DataTree

# ND DataArray that gets stacked along a multiindex
da = xr.DataArray(np.ones((3, 3)), coords={"dim1": [1, 2, 3], "dim2": [4, 5, 6]})
da = da.stack(feature=["dim1", "dim2"])

# Extract just the stacked coordinates for saving in a dataset
ds = xr.Dataset(data_vars={"feature": da.feature})

# Reset the multiindex, which should make things serializable
ds = ds.reset_index("feature")

dt1 = DataTree()
dt2 = DataTree(name="feature", data=ds)
dt1["foo"] = dt2

# Somehow in this step, dt1.foo.feature.dim1.variable becomes an IndexVariable again
print(type(dt1.foo.feature.dim1.variable))

# Works
dt1.to_netcdf("test.nc", mode="w")

# Fails
dt1.to_zarr("test.zarr", mode="w")
```

But we can reproduce in xarray with the test added here. |
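Since the body notes the failure can be reproduced in plain xarray, a minimal DataTree-free sketch of that reproduction (the store name is illustrative, not taken from the PR):

```python
import numpy as np
import xarray as xr

# A dataset whose "feature" variable comes from a stacked multiindex that has
# since been reset; round-tripping it through the zarr backend used to fail.
da = xr.DataArray(np.ones((3, 3)), coords={"dim1": [1, 2, 3], "dim2": [4, 5, 6]})
stacked = da.stack(feature=["dim1", "dim2"])
ds = xr.Dataset(data_vars={"feature": stacked.feature}).reset_index("feature")

ds.to_zarr("repro.zarr", mode="w")  # failed on the zarr backend before this fix
```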
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1985969769 | PR_kwDOAMm_X85fDaBX | 8434 | Automatic region detection and transpose for `to_zarr()` | slevang 39069044 | closed | 0 | 15 | 2023-11-09T16:15:08Z | 2023-11-14T18:34:50Z | 2023-11-14T18:34:50Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/8434 |
A quick pass at implementing these two improvements for zarr region writes (a usage sketch follows below):

1. Automatic detection of the region to write, inferred by matching the dataset's coordinates against the existing store (`region="auto"`).
2. Automatic transposition of the data to match the dimension order of the target store.
|
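A minimal usage sketch of what this enables, assuming a store previously written from the same dataset (the paths, tutorial dataset, and dropped coordinates are illustrative):

```python
import xarray as xr

ds = xr.tutorial.open_dataset("air_temperature")
ds.to_zarr("air.zarr", mode="w")  # initial full write

# The slice to overwrite is inferred by matching the subset's coordinates
# against the existing store, instead of spelling out integer slices such as
# region={"time": slice(0, 10)}. Unrelated index coordinates may still need to
# be dropped by hand on versions predating the later region-write cleanups.
ds.isel(time=slice(0, 10)).drop_vars(["lat", "lon"]).to_zarr("air.zarr", region="auto")
```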
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8434/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1483235066 | PR_kwDOAMm_X85Eti0b | 7364 | Handle numpy-only attrs in `xr.where` | slevang 39069044 | closed | 0 | 1 | 2022-12-08T00:52:43Z | 2022-12-10T21:52:49Z | 2022-12-10T21:52:37Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7364 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7364/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1424732975 | PR_kwDOAMm_X85Bnoaj | 7229 | Fix coordinate attr handling in `xr.where(..., keep_attrs=True)` | slevang 39069044 | closed | 0 | 5 | 2022-10-26T21:45:01Z | 2022-11-30T23:35:29Z | 2022-11-30T23:35:29Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7229 |
Reverts the |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1198058137 | PR_kwDOAMm_X8416DPB | 6461 | Fix `xr.where(..., keep_attrs=True)` bug | slevang 39069044 | closed | 0 | 4 | 2022-04-09T03:02:40Z | 2022-10-25T22:40:15Z | 2022-04-12T02:12:39Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/6461 |
Fixes a bug introduced by #4687 where passing a non-xarray object to `xr.where(..., keep_attrs=True)` would fail. |
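A hedged sketch of the kind of call this is about, with plain scalars rather than xarray objects as the value arguments (the exact failing example is not preserved in this row):

```python
import xarray as xr

cond = xr.DataArray([True, False, True], dims="x")

# Plain Python scalars as the second and third arguments, with attribute
# propagation requested; the attrs-merging logic previously assumed xarray
# objects carrying an .attrs mapping, which is what this fix addresses.
result = xr.where(cond, 1, 0, keep_attrs=True)
```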
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1359368857 | PR_kwDOAMm_X84-PSvu | 6978 | fix passing of curvefit kwargs | slevang 39069044 | open | 0 | 5 | 2022-09-01T20:26:01Z | 2022-10-11T18:50:45Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/6978 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1043746973 | PR_kwDOAMm_X84uC1vs | 5933 | Reimplement `.polyfit()` with `apply_ufunc` | slevang 39069044 | open | 0 | 6 | 2021-11-03T15:29:58Z | 2022-10-06T21:42:09Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/5933 |
Reimplement `.polyfit()` with `apply_ufunc`.

There is a bunch of fiddly code here for handling the differing outputs from … A few minor departures from the previous implementation:

1. The …

No new tests have been added since the previous suite was fairly comprehensive. Would be great to get some performance reports on real-world data such as the climate model detrending application in #5629. A rough sketch of the `apply_ufunc` pattern involved follows below. |
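Not the PR's actual code, but a rough sketch of the general pattern it describes: wrapping a 1-D polynomial fit with `apply_ufunc` so it maps over all other dimensions and can run chunk-wise under dask (the helper name and toy data are illustrative):

```python
import numpy as np
import xarray as xr

def _polyfit_1d(y, x, deg):
    # apply_ufunc moves the core dimension ("time") to the last axis; np.polyfit
    # wants it on the first axis and returns coefficients along its first axis,
    # so reshape on the way in and out.
    stacked = np.moveaxis(y, -1, 0).reshape(len(x), -1)   # (time, n_points)
    coeffs = np.polyfit(x, stacked, deg)                  # (deg + 1, n_points)
    return np.moveaxis(coeffs, 0, -1).reshape(y.shape[:-1] + (deg + 1,))

da = xr.DataArray(
    np.random.rand(4, 5, 100),
    dims=("y", "x", "time"),
    coords={"time": np.arange(100.0)},
)
deg = 1
coeffs = xr.apply_ufunc(
    _polyfit_1d,
    da,
    kwargs={"x": da["time"].values, "deg": deg},
    input_core_dims=[["time"]],
    output_core_dims=[["degree"]],
    dask="parallelized",
    dask_gufunc_kwargs={"output_sizes": {"degree": deg + 1}},
    output_dtypes=[float],
)
print(coeffs.sizes)  # y: 4, x: 5, degree: 2
```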
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5933/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1381297782 | PR_kwDOAMm_X84_XseG | 7063 | Better dtype preservation for rolling mean on dask array | slevang 39069044 | closed | 0 | 1 | 2022-09-21T17:59:07Z | 2022-09-22T22:06:08Z | 2022-09-22T22:06:08Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7063 |
This just tests to make sure we at least get the same dtype whether we have a numpy or dask array. |
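A hedged sketch of the property being tested here, assuming dask is installed (window size and dtype are illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.ones(10, dtype="float32"), dims="x")

numpy_result = da.rolling(x=3).mean()
dask_result = da.chunk({"x": 5}).rolling(x=3).mean().compute()

# The rolling mean should not silently upcast just because the data is chunked.
assert numpy_result.dtype == dask_result.dtype
```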
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7063/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1380016376 | PR_kwDOAMm_X84_TlHf | 7060 | More informative error for non-existent zarr store | slevang 39069044 | closed | 0 | 2 | 2022-09-20T21:27:35Z | 2022-09-20T22:38:45Z | 2022-09-20T22:38:45Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7060 |
I've often been tripped up by the stack trace noted in #6484. This PR changes two things:
|
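A hedged sketch of the trip-up described in #6484, i.e. pointing `open_zarr` at a path where no store exists (the path is illustrative; the exact error text is not shown in this row):

```python
import xarray as xr

# Before this change, a typo'd or missing store path surfaced as a confusing
# low-level traceback from the backend machinery; the intent here is to fail
# with a clearer "store not found" style of error instead.
xr.open_zarr("path/that/does/not/exist.zarr")
```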
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7060/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
797302408 | MDExOlB1bGxSZXF1ZXN0NTY0MzM0ODQ1 | 4849 | Basic curvefit implementation | slevang 39069044 | closed | 0 | 12 | 2021-01-30T01:28:16Z | 2021-03-31T16:55:53Z | 2021-03-31T16:55:53Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/4849 |
This is a simple implementation of a more general curve-fitting API as discussed in #4300, using the existing `scipy.optimize.curve_fit` functionality. |
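A short usage sketch of the kind of API this adds, assuming scipy is installed (the exponential model and toy data are illustrative):

```python
import numpy as np
import xarray as xr

def exponential(t, a, tau):
    # Model function in the scipy.optimize.curve_fit convention: the first
    # argument is the coordinate, the remaining arguments are fit parameters.
    return a * np.exp(-t / tau)

t = np.linspace(0, 10, 100)
da = xr.DataArray(3.0 * np.exp(-t / 2.0), coords={"t": t}, dims="t")

fit = da.curvefit(coords="t", func=exponential)
print(fit["curvefit_coefficients"])  # best-fit values for a and tau
```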
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4849/reactions", "total_count": 5, "+1": 4, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);