
pull_requests


11 rows where user = 39069044




id node_id number state locked title user body created_at updated_at closed_at merged_at merge_commit_sha assignee milestone draft head base author_association auto_merge repo url merged_by
564334845 MDExOlB1bGxSZXF1ZXN0NTY0MzM0ODQ1 4849 closed 0 Basic curvefit implementation slevang 39069044 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #4300 - [x] Tests added - [x] Passes `pre-commit run --all-files` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] New functions/methods are listed in `api.rst` This is a simple implementation of a more general curve-fitting API as discussed in #4300, using the existing scipy `curve_fit` functionality wrapped with `apply_ufunc`. It works for arbitrary user-supplied 1D functions that ingest numpy arrays. Formatting and nomenclature of the outputs was largely copied from `.polyfit`, but could probably be improved. 2021-01-30T01:28:16Z 2021-03-31T16:55:53Z 2021-03-31T16:55:53Z 2021-03-31T16:55:53Z ddc352faa6de91f266a1749773d08ae8d6f09683     0 2ab8e52da30a6c78ca8b7e242818ad60123dc1fa ba47216ec1cd2f170fd85a10f232be7bf3ecc578 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/4849  
772496364 PR_kwDOAMm_X84uC1vs 5933 open 0 Reimplement `.polyfit()` with `apply_ufunc` slevang 39069044 - [x] Closes #4554 - [x] Closes #5629 - [x] Closes #5644 - [ ] Tests added - [x] Passes `pre-commit run --all-files` - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` Reimplement `polyfit` using `apply_ufunc` rather than `dask.array.linalg.lstsq`. This should solve a number of issues with memory usage and chunking that were reported on the current version of `polyfit`. The main downside is that variables chunked along the fitting dimension cannot be handled with this approach. There is a bunch of fiddly code here for handling the differing outputs from `np.polyfit` depending on the values of the `full` and `cov` args. Depending on the performance implications, we could simplify some by keeping these in `apply_ufunc` and dropping later. Much of this parsing would still be required though, because the only way to get the covariances is to set `cov=True, full=False`. A few minor departures from the previous implementation: 1. The `rank` and `singular_values` diagnostic variables returned by `np.polyfit` are now returned on a pointwise basis, since these can change depending on skipped nans. `np.polyfit` also returns the `rcond` used for each fit which I've included here. 2. As mentioned above, this breaks fitting done along a chunked dimension. To avoid regression, we could set `allow_rechunk=True` and warn about memory implications. 3. Changed default `skipna=True`, since the previous behavior seemed to be a limitation of the computational method. 4. For consistency with the previous version, I included a `transpose` operation to put `degree` as the first dimension. This is arbitrary though, and actually the opposite of how `curvefit` returns ordering. So we could match up with `curvefit` but it would be breaking for polyfit. No new tests have been added since the previous suite was fairly comprehensive. 
Would be great to get some performance reports on real-world data such as the climate model detrending application in #5629. 2021-11-03T15:29:58Z 2022-10-06T21:42:09Z     3381c5aacaaeecc8a00357896b29f32a95be0b20     0 62b4637d4fcc688a8e2e2c5eece80f64c0605229 d1e4164f3961d7bbb3eb79037e96cae14f7182f8 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/5933  
904410049 PR_kwDOAMm_X8416DPB 6461 closed 0 Fix `xr.where(..., keep_attrs=True)` bug slevang 39069044 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #6444 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` Fixes a bug introduced by #4687 where passing a non-xarray object to `x` in `xr.where(cond, x, y, keep_attrs=True)` caused a failure. The `keep_attrs` callable passed to `merge_attrs()` tries to access the attributes of `x` which do not exist in this case. This fix just checks to make sure `x` has attributes, and if not will pass through `keep_attrs=True`. 2022-04-09T03:02:40Z 2022-10-25T22:40:15Z 2022-04-12T02:12:39Z 2022-04-12T02:12:39Z 0cd5285c56c5375da9f4d27836ec41eea4d069f3     0 38ef3b793ffe2cfe58e6f433e0012e16442f2bc3 851dadeb0338403e5021c3fbe80cbc9127ee672d CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/6461  
1044196334 PR_kwDOAMm_X84-PSvu 6978 open 0 fix passing of curvefit kwargs slevang 39069044 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #6891 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` 2022-09-01T20:26:01Z 2022-10-11T18:50:45Z     3bf2c9cd6f3d76a55d19f3dd187a449c7523b91c     0 a2e07b0a1bf2a487e01b717776b2f2ec3fbb6366 9d1499e22e2748eeaf088e6a2abc5c34053bf37c CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/6978  
1062097375 PR_kwDOAMm_X84_TlHf 7060 closed 0 More informative error for non-existent zarr store slevang 39069044 - [x] Closes #6484 - [x] Tests added I've often been tripped up by the stack trace noted in #6484. This PR changes two things: 1. Handles the zarr `GroupNotFoundError` error with a more informative `FileNotFoundError`, displaying the path where we didn't find a zarr store. 2. Moves the consolidated metadata warning to after the step of successfully opening the zarr with non-consolidated metadata. This way the warning isn't shown if we are actually trying to open a non-existent zarr store, in which case we only get the error above and no warning. 2022-09-20T21:27:35Z 2022-09-20T22:38:45Z 2022-09-20T22:38:45Z 2022-09-20T22:38:45Z e6791852aa7ec0b126048b0986e205e158ab9601     0 a4efa689607d6229d9748d4470bfa8425dd740fe 716973e41060184beebd64935afe196a805ef481 CONTRIBUTOR
{
    "enabled_by": {
        "login": "max-sixty",
        "id": 5635139,
        "node_id": "MDQ6VXNlcjU2MzUxMzk=",
        "avatar_url": "https://avatars.githubusercontent.com/u/5635139?v=4",
        "gravatar_id": "",
        "url": "https://api.github.com/users/max-sixty",
        "html_url": "https://github.com/max-sixty",
        "followers_url": "https://api.github.com/users/max-sixty/followers",
        "following_url": "https://api.github.com/users/max-sixty/following{/other_user}",
        "gists_url": "https://api.github.com/users/max-sixty/gists{/gist_id}",
        "starred_url": "https://api.github.com/users/max-sixty/starred{/owner}{/repo}",
        "subscriptions_url": "https://api.github.com/users/max-sixty/subscriptions",
        "organizations_url": "https://api.github.com/users/max-sixty/orgs",
        "repos_url": "https://api.github.com/users/max-sixty/repos",
        "events_url": "https://api.github.com/users/max-sixty/events{/privacy}",
        "received_events_url": "https://api.github.com/users/max-sixty/received_events",
        "type": "User",
        "site_admin": false
    },
    "merge_method": "squash",
    "commit_title": "More informative error for non-existent zarr store (#7060)",
    "commit_message": "* more informative error for non-existent zarr store\r\n\r\n* add whats-new"
}
xarray 13221727 https://github.com/pydata/xarray/pull/7060  
1063176070 PR_kwDOAMm_X84_XseG 7063 closed 0 Better dtype preservation for rolling mean on dask array slevang 39069044 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #7062 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` This just tests to make sure we at least get the same dtype whether we have a numpy or dask array. 2022-09-21T17:59:07Z 2022-09-22T22:06:08Z 2022-09-22T22:06:08Z 2022-09-22T22:06:08Z 1f4be33365573da19a684dd7f2fc97ace5d28710     0 c3f85c94cf32d9094ce21b67b2318c2a36ac09f0 72bf673374d2d81e607dcd6817c4997fd7487dc0 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/7063  
1100908195 PR_kwDOAMm_X85Bnoaj 7229 closed 0 Fix coordinate attr handling in `xr.where(..., keep_attrs=True)` slevang 39069044 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #7220 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` Reverts the `getattr` method used in `xr.where(..., keep_attrs=True)` from #6461, but keeps handling for scalar inputs. Adds some test cases to ensure consistent attribute handling. 2022-10-26T21:45:01Z 2022-11-30T23:35:29Z 2022-11-30T23:35:29Z 2022-11-30T23:35:29Z 675a3ff6d9fc52fde73d356345848e89a4705aaf     0 fb7013a3188294c4cf436550628e7a918abdd45e 3aa75c8d00a4a2d4acf10d80f76b937cadb666b7 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/7229  
1152789787 PR_kwDOAMm_X85Eti0b 7364 closed 0 Handle numpy-only attrs in `xr.where` slevang 39069044 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #7362 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` 2022-12-08T00:52:43Z 2022-12-10T21:52:49Z 2022-12-10T21:52:37Z 2022-12-10T21:52:37Z 3b6cd2a2e44e9777f865a2bc1be958ae313f66da     0 12d03f2d6b86d5852ffe5209c94a3062430383b4 6e77f5e8942206b3e0ab08c3621ade1499d8235b CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/7364  
1594728535 PR_kwDOAMm_X85fDaBX 8434 closed 0 Automatic region detection and transpose for `to_zarr()` slevang 39069044 - [x] Closes #7702, #8421 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` A quick pass at implementing these two improvements for zarr region writes: 1. allow passing `region={dim: "auto"}`, which opens the existing zarr store and identifies the correct slice to write to, using a variation of the approach suggested by @DahnJ [here](https://github.com/pydata/xarray/issues/7702#issuecomment-1669747481). We also check for non-matching coordinates and non-contiguous indices. 2. automatically transpose dimensions if they otherwise match the existing store but are out of order 2023-11-09T16:15:08Z 2023-11-14T18:34:50Z 2023-11-14T18:34:50Z 2023-11-14T18:34:49Z f0ade3d623676b4aeb1ca0d444d4d9cbfc38a0b7     0 d4b8a0d227ccdf3febe2bd3834c73b171057a194 49bd63a8332c1930a866724a2968b2d880dae25e CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/8434  
1759283186 PR_kwDOAMm_X85o3Ify 8809 closed 0 Pass variable name to `encode_zarr_variable` slevang 39069044 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes https://github.com/xarray-contrib/xeofs/issues/148 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` The change from https://github.com/pydata/xarray/pull/8672 mostly fixed the issue of serializing a reset multiindex in the backends, but there was an additional niche issue that turned up in xeofs that was causing serialization to still fail on the zarr backend. The issue is that zarr is the only backend that uses a custom version of `encode_cf_variable` called `encode_zarr_variable`, and the way this gets called we don't pass through the `name` of the variable before running `ensure_not_multiindex`. As a minimal fix, this PR just passes `name` through as an additional arg to the general `encode_variable` function. See @benbovy's [comment](https://github.com/pydata/xarray/pull/8672#issuecomment-1929837384) that maybe we should actually unwrap the level coordinate in `reset_index` and clean up the checks in `ensure_not_multiindex`, but I wasn't able to get that working easily. 
The exact workflow this turned up in involves DataTree and looks like this: ```python import numpy as np import xarray as xr from datatree import DataTree # ND DataArray that gets stacked along a multiindex da = xr.DataArray(np.ones((3, 3)), coords={"dim1": [1, 2, 3], "dim2": [4, 5, 6]}) da = da.stack(feature=["dim1", "dim2"]) # Extract just the stacked coordinates for saving in a dataset ds = xr.Dataset(data_vars={"feature": da.feature}) # Reset the multiindex, which should make things serializable ds = ds.reset_index("feature") dt1 = DataTree() dt2 = DataTree(name="feature", data=ds) dt1["foo"] = dt2 # Somehow in this step, dt1.foo.feature.dim1.variable becomes an IndexVariable again print(type(dt1.foo.feature.dim1.variable)) # Works dt1.to_netcdf("test.nc", mode="w") # Fails dt1.to_zarr("test.zarr", mode="w") ``` But we can reproduce in xarray w… 2024-03-06T16:21:53Z 2024-04-03T14:26:49Z 2024-04-03T14:26:48Z   a54b0e6cf2911f7f1672266dffcf73494063d1a4     0 0fd34adefa43f2dc77ae39b875ef658d613b36f6 473b87f19e164e508566baf7c8750ac4cb5b50f7 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/8809  
1802660917 PR_kwDOAMm_X85rcmw1 8904 open 0 Handle extra indexes for zarr region writes slevang 39069044 <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` Small follow up to #8877. If we're going to drop the indices anyways for region writes, we may as well not raise if they are still in the dataset. This makes the user experience of region writes simpler: ```python ds = xr.tutorial.open_dataset("air_temperature") ds.to_zarr("test.zarr") region = {"time": slice(0, 10)} # This fails unless we remember to ds.drop_vars(["lat", "lon"]) ds.isel(**region).to_zarr("test.zarr", region=region) ``` I find this annoying because I often have a dataset with a bunch of unrelated indexes and have to remember which ones to drop, or use some verbose `set` logic. I thought #8877 might have already done this, but not quite. By just reordering the point at which we drop indices, we can now skip this. We still raise if data vars are passed that don't overlap with the region. cc @dcherian 2024-04-02T14:34:00Z 2024-04-03T19:20:37Z     b2cc98ae16419e7b600d62ecfd6c616c4a6c028c     0 523f18ea66be2058a8d22c5a43e3fbac47f8afcc 97d3a3aaa071fa5341132331abe90ec39f914b52 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/8904  

Advanced export

JSON shape: default, array, newline-delimited, object
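Of the JSON shapes listed above, the newline-delimited form is the most convenient for streaming large result sets, since each line of the response body is one complete row object. A minimal sketch of consuming it (the two sample rows here are hypothetical stand-ins, not rows from this table):

```python
import json

# Sketch: parse a Datasette "newline-delimited" JSON export, where each
# non-empty line of the response body is one row serialized as a JSON
# object. The two sample rows below are hypothetical stand-ins.
body = '{"id": 1, "state": "open"}\n{"id": 2, "state": "closed"}\n'
rows = [json.loads(line) for line in body.splitlines() if line.strip()]
print(rows)
```

The same parsing loop works line-by-line over a streamed HTTP response, which is the point of this shape: rows can be processed without loading the whole export into memory.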


CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);
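As a rough illustration of how the page's filter maps onto this schema, the sketch below loads a trimmed-down version of the table into an in-memory SQLite database and runs the `user = 39069044` query. Only a few of the columns are kept, and just two of the eleven rows (taken from the table above) are inserted.

```python
import sqlite3

# Sketch: a trimmed subset of the [pull_requests] schema above, loaded
# into an in-memory SQLite database. Foreign keys and most columns are
# omitted for brevity.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE pull_requests (
    id INTEGER PRIMARY KEY,
    number INTEGER,
    state TEXT,
    title TEXT,
    user INTEGER
)""")

# Two of the eleven rows shown on this page.
conn.executemany(
    "INSERT INTO pull_requests VALUES (?, ?, ?, ?, ?)",
    [
        (564334845, 4849, "closed", "Basic curvefit implementation", 39069044),
        (772496364, 5933, "open", "Reimplement `.polyfit()` with `apply_ufunc`", 39069044),
    ],
)

# The query behind "rows where user = 39069044".
rows = conn.execute(
    "SELECT number, state, title FROM pull_requests WHERE user = ? ORDER BY id",
    (39069044,),
).fetchall()
print(rows)
```

Against the full table this query returns all eleven rows; here it returns the two inserted samples.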
Powered by Datasette · About: xarray-datasette