xarray issue #5483: Cannot interpolate on a multifile .grib array. Single file works fine.

Opened by Alexander-Serov (user 22743277) on 2021-06-17T14:36:57Z · closed as completed on 2022-04-09T15:50:23Z · last updated 2022-04-09T15:50:24Z · 1 comment · no reactions · author association: NONE

What happened: I have multiple .grib files that I can open successfully with the xr.open_mfdataset() function and the cfgrib engine. However, I cannot interpolate the opened array because the dask package raises a NotImplementedError: internally, the interpolation apparently requires a kind of fancy indexing that dask does not yet support. The latitude and longitude are well within the stored grid. The interpolation works just fine if I open a single file with xr.load_dataset('file.grb', engine='cfgrib'). Since the files are too big, I cannot simply load the array completely or resave it into a single file. So I was wondering whether you might have ideas for a workaround that would let me get the values I need until this is implemented in dask. Basically, I just need to extract (interpolate) all variables at a handful of locations.
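
One possible workaround, sketched below (untested against this dataset, so treat it as an assumption): the traceback shows the failure originates in the sortby() call that interp() makes when assume_sorted is left at its default, so if the latitude and longitude coordinates are already monotonically increasing, passing assume_sorted=True should skip that call entirely; and if nearest-grid-point values are acceptable, sel(..., method='nearest') avoids interpolation altogether. Both keyword arguments are real xarray API; whether they sidestep the error for this particular dataset is the assumption.

```python
import xarray as xr
from glob import glob

dsmf = xr.open_mfdataset(
    glob('<root_path>/**/*.grb', recursive=True),
    engine='cfgrib', parallel=True, combine='nested', concat_dim='time',
)

# Only valid if latitude/longitude are already sorted in increasing order
# (grib latitudes are often stored descending, so check first); this skips
# the internal sortby() that triggers the dask fancy-indexing path.
point = dsmf.interp(latitude=48, longitude=12, assume_sorted=True)

# Alternative: take the nearest grid point instead of interpolating.
nearest = dsmf.sel(latitude=48, longitude=12, method='nearest')
```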

What you expected to happen: Interpolation of the multifile grib array along latitude and longitude.

Minimal Complete Verifiable Example:

```python
import xarray as xr
from glob import glob

dsmf = xr.open_mfdataset(
    glob('<root_path>/**/*.grb', recursive=True),
    engine='cfgrib',
    parallel=True,
    combine='nested',
    concat_dim='time',
)
dsmf.interp(latitude=48, longitude=12)
```

Result:

```
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "C:\tools\miniconda3\envs\my_env\lib\site-packages\xarray\core\dataset.py", line 2989, in interp
    obj = self if assume_sorted else self.sortby([k for k in coords])
  File "C:\tools\miniconda3\envs\my_env\lib\site-packages\xarray\core\dataset.py", line 5920, in sortby
    return aligned_self.isel(**indices)
  File "C:\tools\miniconda3\envs\my_env\lib\site-packages\xarray\core\dataset.py", line 2230, in isel
    var_value = var_value.isel(var_indexers)
  File "C:\tools\miniconda3\envs\my_env\lib\site-packages\xarray\core\variable.py", line 1135, in isel
    return self[key]
  File "C:\tools\miniconda3\envs\my_env\lib\site-packages\xarray\core\variable.py", line 780, in __getitem__
    data = as_indexable(self._data)[indexer]
  File "C:\tools\miniconda3\envs\my_env\lib\site-packages\xarray\core\indexing.py", line 1312, in __getitem__
    return array[key]
  File "C:\tools\miniconda3\envs\my_env\lib\site-packages\dask\array\core.py", line 1749, in __getitem__
    dsk, chunks = slice_array(out, self.name, self.chunks, index2, self.itemsize)
  File "C:\tools\miniconda3\envs\my_env\lib\site-packages\dask\array\slicing.py", line 170, in slice_array
    dsk_out, bd_out = slice_with_newaxes(out_name, in_name, blockdims, index, itemsize)
  File "C:\tools\miniconda3\envs\my_env\lib\site-packages\dask\array\slicing.py", line 192, in slice_with_newaxes
    dsk, blockdims2 = slice_wrap_lists(out_name, in_name, blockdims, index2, itemsize)
  File "C:\tools\miniconda3\envs\my_env\lib\site-packages\dask\array\slicing.py", line 238, in slice_wrap_lists
    raise NotImplementedError("Don't yet support nd fancy indexing")
NotImplementedError: Don't yet support nd fancy indexing
```
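
For context, here is a minimal standalone sketch of the dask limitation the traceback ends in (a hypothetical example added for illustration, not part of the original report): dask arrays of this era support fancy indexing with an integer list or array along only one axis, so the indexing across both latitude and longitude that sortby()/isel() produce here falls into the unsupported case.

```python
import dask.array as da

x = da.ones((4, 4), chunks=2)

# Fancy indexing along a single axis is supported:
row_subset = x[[0, 1], :]

try:
    # Integer lists along two axes at once reach slice_wrap_lists(),
    # which raises the error seen in the traceback above.
    x[[0, 1], [1, 2]]
except NotImplementedError as err:
    print(err)  # Don't yet support nd fancy indexing

# dask's supported spelling for pointwise (vectorized) indexing is .vindex:
pointwise = x.vindex[[0, 1], [1, 2]]
```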

Anything else we need to know?: Since the files are too big, I am unable to share them for the moment, but I suspect the issue might be reproducible on any multifile grib combination.

Environment:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.10 | packaged by conda-forge | (default, May 11 2021, 06:25:23) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 158 Stepping 13, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: ('English_United Kingdom', '1252')
libhdf5: 1.10.6
libnetcdf: 4.7.3
xarray: 0.18.2
pandas: 1.2.4
numpy: 1.20.3
scipy: 1.6.3
netCDF4: 1.5.6
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.8.3
cftime: 1.5.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: 0.9.9.0
iris: None
bottleneck: None
dask: 2021.06.0
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 49.6.0.post20210108
pip: 21.1.2
conda: None
pytest: 6.2.4
IPython: None
sphinx: None
```

