issues

2 rows where comments = 4, type = "issue", and user = 14371165, sorted by updated_at descending


#6833: Require a pull request before merging to main
  • id: 1318800553 · node_id: I_kwDOAMm_X85Om0yp
  • user: Illviljan (14371165) · author_association: MEMBER · state: closed · locked: 0 · comments: 4
  • created_at: 2022-07-26T22:09:55Z · updated_at: 2023-01-13T16:51:03Z · closed_at: 2023-01-13T16:51:03Z

Is your feature request related to a problem?

I was making sure the test in #6832 failed on main. When it did, I wrote a few lines in the what's new file but forgot to switch back to the other branch and accidentally pushed directly to main. :(

Describe the solution you'd like

I think it's best if we require a pull request for merging. We seem to pretty much do this anyway.

Seems to be this setting if I understand correctly:
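For illustration only, the same rule can also be expressed through the GitHub REST API's branch protection endpoint (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`); the sketch below uses placeholder values and assumes an admin token, while in practice the checkbox under Settings → Branches → Branch protection rules is all that is needed.

```python
# Hedged sketch (not the actual change, which is a repository setting in the UI):
# enable "Require a pull request before merging" on main via the GitHub REST API.
import requests

OWNER, REPO, BRANCH = "pydata", "xarray", "main"
TOKEN = "ghp_..."  # placeholder: a token with repository administration rights

resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        # The part this issue asks for: changes to main must come via a PR.
        # 0 approvals keeps the rule minimal; the exact policy is an assumption.
        "required_pull_request_reviews": {"required_approving_review_count": 0},
        # The endpoint requires these fields as well; left unset/permissive here.
        "required_status_checks": None,
        "enforce_admins": False,
        "restrictions": None,
    },
)
resp.raise_for_status()
```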

Describe alternatives you've considered

No response

Additional context

No response

reactions:
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6833/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#3962: Interpolation - Support extrapolation method "clip"
  • id: 597785475 · node_id: MDU6SXNzdWU1OTc3ODU0NzU=
  • user: Illviljan (14371165) · author_association: MEMBER · state: open · locked: 0 · comments: 4
  • created_at: 2020-04-10T09:07:13Z · updated_at: 2022-05-02T13:42:24Z

Hello,

I would like an option in `da.interp()` that, instead of returning NaNs during extrapolation, returns the data corresponding to the end of the breakpoint data set range.

One way to do this is to limit the new coordinates to the minimum and maximum values of the array's coordinates; a simple example with this solution is included below. I think this is a rather safe approach, since it only modifies the inputs to the various interpolation classes that xarray currently uses. It does look a little odd when printing the extrapolated value, though: the coordinates show the clipped value instead of the requested coordinates. Maybe this can be handled elegantly somewhere in the source code? (A possible workaround for the labels is sketched after the expected output below.)

MATLAB uses this quite frequently in its interpolation functions:
  • https://mathworks.com/help/simulink/ug/methods-for-estimating-missing-points.html
  • https://mathworks.com/help/simulink/slref/2dlookuptable.html

MCVE Code Sample

```python
import numpy as np
import xarray as xr


def interp(da, coords, extrapolation='clip'):
    """
    Linear interpolation that clips the inputs to the coords min and max value.

    Parameters
    ----------
    da : DataArray
        DataArray to interpolate.
    coords : dict
        Coordinates for the interpolated value.
    """
    if extrapolation == 'clip':
        for k, v in da.coords.items():
            coords[k] = np.maximum(coords[k], np.min(v.values))
            coords[k] = np.minimum(coords[k], np.max(v.values))

    return da.interp(coords)


# Create coordinates:
x = np.linspace(1000, 6000, 4)
y = np.linspace(100, 1200, 3)

# Create data:
X = np.meshgrid(*[x, y], indexing='ij')
data = X[0] * X[1]

# Create DataArray:
da = xr.DataArray(data=data, coords=[('x', x), ('y', y)], name='data')

# Attempt to extrapolate:
datai = interp(da, {'x': 7000, 'y': 375})
```

Expected Output

```python
print(datai)
<xarray.DataArray 'data' ()>
array(2250000.)
Coordinates:
    x        float64 6e+03
    y        float64 375.0
```
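One possible refinement of the workaround, addressing the caveat above that the printed coordinates show the clipped value: relabel the result with the originally requested coordinates via `DataArray.assign_coords`. This is only a sketch; note that `interp()` above mutates the `coords` dict it is given, so a copy is passed.

```python
# Sketch: keep the clipped value but restore the labels that were actually
# requested, so the result shows x=7000 rather than the clipped x=6000.
requested = {'x': 7000, 'y': 375}

datai = interp(da, dict(requested))       # pass a copy; interp() mutates coords
datai = datai.assign_coords(**requested)  # relabel the scalar coordinates
print(datai.x.item(), datai.item())       # 7000 2250000.0
```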

Versions

Output of `xr.show_versions()`

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.7 (default, Mar 23 2020, 23:19:08) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en
LOCALE: None.None
libhdf5: 1.10.4
libnetcdf: None
xarray: 0.15.0
pandas: 1.0.3
numpy: 1.18.1
scipy: 1.4.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2.13.0
distributed: 2.13.0
matplotlib: 3.1.3
cartopy: None
seaborn: 0.10.0
numbagg: None
setuptools: 46.1.3.post20200330
pip: 20.0.2
conda: 4.8.3
pytest: 5.4.1
IPython: 7.13.0
sphinx: 2.4.4
```
reactions:
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3962/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue

Table schema:

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
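For reference, the "comments = 4, type = "issue", and user = 14371165" filter shown at the top of this page corresponds to a query like the sketch below, written against this schema with Python's built-in `sqlite3` module; the database filename `github.db` is an assumption.

```python
# Sketch: reproduce the page's row filter against the issues table defined above.
# The database filename is an assumption; Datasette serves the same data over HTTP.
import sqlite3

conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT number, title, state, comments, created_at, updated_at
    FROM issues
    WHERE comments = ? AND type = ? AND user = ?
    ORDER BY updated_at DESC
    """,
    (4, "issue", 14371165),
).fetchall()

for row in rows:
    print(row["number"], row["title"], row["state"])
```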