issues


7 rows where user = 5797727 sorted by updated_at descending


id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
1169750048 I_kwDOAMm_X85FuPgg 6360 Multidimensional `interpolate_na()` iuryt 5797727 open 0     4 2022-03-15T14:27:46Z 2023-09-28T11:51:20Z   NONE      

Is your feature request related to a problem?

I think that having a way to run a multidimensional interpolation for filling missing values would be awesome.

The code snippet below creates some data and shows the problem I am having. If the data has some orientation, we can't simply interpolate each dimension separately.

```python
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt

n = 30
x = xr.DataArray(np.linspace(0, 2*np.pi, n), dims=['x'])
y = xr.DataArray(np.linspace(0, 2*np.pi, n), dims=['y'])
z = np.sin(x) * xr.ones_like(y)

mask = xr.DataArray(np.random.randint(0, 1+1, (n, n)).astype('bool'), dims=['x', 'y'])

kw = dict(add_colorbar=False)

fig, ax = plt.subplots(1, 3, figsize=(11, 3))
z.plot(ax=ax[0], **kw)
z.where(mask).plot(ax=ax[1], **kw)
z.where(mask).interpolate_na('x').plot(ax=ax[2], **kw)
```

I tried to use advanced interpolation for that, but it doesn't look like the best solution.

```python
zs = z.where(mask).stack(k=['x', 'y'])
zs = zs.where(np.isnan(zs), drop=True)
xi, yi = zs.k.x.drop('k'), zs.k.y.drop('k')
zi = z.interp(x=xi, y=yi)

fig, ax = plt.subplots()
z.where(mask).plot(ax=ax, **kw)
ax.scatter(xi, yi, c=zi, **kw, linewidth=1, edgecolor='k')
```

returns:

Describe the solution you'd like

Simply z.interpolate_na(['x','y'])

Describe alternatives you've considered

I could extract the data to numpy and interpolate using scipy.interpolate.griddata, but this is not the way xarray should work.
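For reference, the `scipy.interpolate.griddata` fallback mentioned above can be sketched as follows. This is a minimal sketch on synthetic NumPy data mirroring the example at the top of this issue, not an xarray API:

```python
import numpy as np
from scipy.interpolate import griddata

np.random.seed(0)

# Synthetic field with gaps, mimicking the masked sin(x) example above.
n = 30
x = np.linspace(0, 2*np.pi, n)
y = np.linspace(0, 2*np.pi, n)
X, Y = np.meshgrid(x, y, indexing='ij')
Z = np.sin(X)
mask = np.random.randint(0, 2, (n, n)).astype('bool')
Z_masked = np.where(mask, Z, np.nan)

# Interpolate the missing points from the valid ones in both dims at once.
valid = ~np.isnan(Z_masked)
Z_filled = Z_masked.copy()
Z_filled[~valid] = griddata(
    (X[valid], Y[valid]), Z_masked[valid],
    (X[~valid], Y[~valid]), method='linear')
```

Note that with `method='linear'` points outside the convex hull of the valid data stay NaN; `method='nearest'` fills everything.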

Additional context

No response

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6360/reactions",
    "total_count": 11,
    "+1": 9,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 2
}
    xarray 13221727 issue
786347954 MDU6SXNzdWU3ODYzNDc5NTQ= 4814 Error while loading an HDF file iuryt 5797727 closed 0     2 2021-01-14T21:19:34Z 2023-08-14T08:42:46Z 2021-01-22T19:30:48Z NONE      

What happened:

I am trying to read an HDF file from a MODIS satellite product. Source of the file: https://ladsweb.modaps.eosdis.nasa.gov/archive/allData/61/MOD06_L2/2015/012/MOD06_L2.A2015012.1510.061.2017318235218.hdf

and

xr.open_dataset('MOD06_L2.A2015012.1510.061.2017318235218.hdf')

returns

Error log

```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
~/.local/lib/python3.6/site-packages/xarray/backends/file_manager.py in _acquire_with_cache_info(self, needs_lock)
    198         try:
--> 199             file = self._cache[self._key]
    200         except KeyError:

~/.local/lib/python3.6/site-packages/xarray/backends/lru_cache.py in __getitem__(self, key)
     52         with self._lock:
---> 53             value = self._cache[key]
     54             self._cache.move_to_end(key)

KeyError: [<class 'netCDF4._netCDF4.Dataset'>, ('/project/umd_amit_tandon/iury/data/coldpools/modis/MOD06_L2/2015/001/MOD06_L2.A2015001.0000.061.2017318203346.hdf',), 'r', (('clobber', True), ('diskless', False), ('format', 'NETCDF4'), ('persist', False))]

During handling of the above exception, another exception occurred:

OSError                                   Traceback (most recent call last)
<ipython-input-3-3a318c45af2b> in <module>()
----> 1 xr.open_dataset(fnames[0])

~/.local/lib/python3.6/site-packages/xarray/backends/api.py in open_dataset(filename_or_obj, group, decode_cf, mask_and_scale, decode_times, autoclose, concat_characters, decode_coords, engine, chunks, lock, cache, drop_variables, backend_kwargs, use_cftime, decode_timedelta)
    507     if engine == "netcdf4":
    508         store = backends.NetCDF4DataStore.open(
--> 509             filename_or_obj, group=group, lock=lock, **backend_kwargs
    510         )
    511     elif engine == "scipy":

~/.local/lib/python3.6/site-packages/xarray/backends/netCDF4_.py in open(cls, filename, mode, format, group, clobber, diskless, persist, lock, lock_maker, autoclose)
    356             netCDF4.Dataset, filename, mode=mode, **kwargs
    357         )
--> 358         return cls(manager, group=group, mode=mode, lock=lock, autoclose=autoclose)
    359 
    360     def _acquire(self, needs_lock=True):

~/.local/lib/python3.6/site-packages/xarray/backends/netCDF4_.py in __init__(self, manager, group, mode, lock, autoclose)
    312         self._group = group
    313         self._mode = mode
--> 314         self.format = self.ds.data_model
    315         self._filename = self.ds.filepath()
    316         self.is_remote = is_remote_uri(self._filename)

~/.local/lib/python3.6/site-packages/xarray/backends/netCDF4_.py in ds(self)
    365     @property
    366     def ds(self):
--> 367         return self._acquire()
    368 
    369     def open_store_variable(self, name, var):

~/.local/lib/python3.6/site-packages/xarray/backends/netCDF4_.py in _acquire(self, needs_lock)
    359 
    360     def _acquire(self, needs_lock=True):
--> 361         with self._manager.acquire_context(needs_lock) as root:
    362             ds = _nc4_require_group(root, self._group, self._mode)
    363         return ds

/opt/conda/lib/python3.6/contextlib.py in __enter__(self)
     79     def __enter__(self):
     80         try:
---> 81             return next(self.gen)
     82         except StopIteration:
     83             raise RuntimeError("generator didn't yield") from None

~/.local/lib/python3.6/site-packages/xarray/backends/file_manager.py in acquire_context(self, needs_lock)
    185     def acquire_context(self, needs_lock=True):
    186         """Context manager for acquiring a file."""
--> 187         file, cached = self._acquire_with_cache_info(needs_lock)
    188         try:
    189             yield file

~/.local/lib/python3.6/site-packages/xarray/backends/file_manager.py in _acquire_with_cache_info(self, needs_lock)
    203                 kwargs = kwargs.copy()
    204                 kwargs["mode"] = self._mode
--> 205                 file = self._opener(*self._args, **kwargs)
    206                 if self._mode == "w":
    207                     # ensure file doesn't get overriden when opened again

netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Dataset.__init__()

netCDF4/_netCDF4.pyx in netCDF4._netCDF4._ensure_nc_success()

OSError: [Errno -51] NetCDF: Unknown file format: b'/project/umd_amit_tandon/iury/data/coldpools/modis/MOD06_L2/2015/001/MOD06_L2.A2015001.0000.061.2017318203346.hdf'
```

What you expected to happen:

Read the HDF file normally.

Minimal Complete Verifiable Example:

Just

xr.open_dataset('MOD06_L2.A2015012.1510.061.2017318235218.hdf')
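A note on why this fails: MOD06_L2 granules are HDF4 (HDF-EOS2), which the netCDF-C library can only read when built with HDF4 support, hence `NetCDF: Unknown file format`. A quick way to check what a file actually is, using the standard magic numbers (the file below is synthetic; the real granule is not downloaded here):

```python
import os
import tempfile

# Magic numbers from the HDF4, HDF5 and netCDF file format specs.
MAGIC = {
    b"\x89HDF\r\n\x1a\n": "HDF5 / netCDF-4",
    b"\x0e\x03\x13\x01": "HDF4",
    b"CDF\x01": "netCDF classic",
    b"CDF\x02": "netCDF 64-bit offset",
}

def sniff(path):
    with open(path, "rb") as f:
        head = f.read(8)
    for magic, name in MAGIC.items():
        if head.startswith(magic):
            return name
    return "unknown"

# Synthetic file carrying the HDF4 magic; a real MOD06_L2 granule would
# sniff the same way.
tmp = os.path.join(tempfile.mkdtemp(), "fake.hdf")
with open(tmp, "wb") as f:
    f.write(b"\x0e\x03\x13\x01" + b"\x00" * 28)

print(sniff(tmp))  # prints HDF4
```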

Anything else we need to know?:

Environment:

Output of <tt>xr.show_versions()</tt>

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 18:10:19) [GCC 7.2.0]
python-bits: 64
OS: Linux
OS-release: 2.6.32-754.35.1.el6.x86_64
machine: x86_64
processor: 
byteorder: little
LC_ALL: C.UTF-8
LANG: C.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.6.3
xarray: 0.16.1
pandas: 1.1.4
numpy: 1.16.4
scipy: 1.2.0
netCDF4: 1.5.4
pydap: None
h5netcdf: 0.8.1
h5py: 2.7.1
Nio: None
zarr: None
cftime: 1.2.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.2.1
dask: 0.16.1
distributed: 1.20.2
matplotlib: 3.2.2
cartopy: None
seaborn: 0.8.1
numbagg: None
pint: None
setuptools: 38.4.0
pip: 20.2.4
conda: 4.4.10
pytest: 3.3.2
IPython: 6.2.1
sphinx: 1.6.6
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4814/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1592154849 I_kwDOAMm_X85e5lrh 7542 `OSError: [Errno -70] NetCDF: DAP server error` when `parallel=True` on a cluster iuryt 5797727 open 0     1 2023-02-20T16:27:11Z 2023-03-20T17:53:39Z   NONE      

What is your issue?

Hi,

I am trying to access the MERRA-2 dataset using OPeNDAP links in xarray. The code below is based on a tutorial that @betolink sent me as an example.

The code runs well with `parallel=False`, but returns `OSError: [Errno -70] NetCDF: DAP server error` if I set `parallel=True`, whether or not I create the cluster.

@betolink suspected that the workers don't know about the authentication and suggested doing something like what is mentioned in @rsignell's issue.

That would involve adding `client.register_worker_plugin(UploadFile('~/.netrc'))` after creating the client. I tested that as well, but it returned the same error. In the code below I had to replace `~/.netrc` with the full path, because otherwise it raised a file-not-found error.

It is important to say that `parallel=True` works fine on my local computer running Ubuntu under WSL.

Has anyone faced this problem before, or does anyone have guesses on how to solve it?

```python
# ----------------------------------
# Import Python modules
# ----------------------------------
import warnings
warnings.filterwarnings("ignore")

import xarray as xr
import matplotlib.pyplot as plt
from calendar import monthrange

create_cluster = True
parallel = True
upload_file = True

if create_cluster:
    # --------------------------------------
    # Creating 50 workers with 1core and 2Gb each
    # --------------------------------------
    import os
    from dask_jobqueue import SLURMCluster
    from dask.distributed import Client
    from dask.distributed import WorkerPlugin

    class UploadFile(WorkerPlugin):
        """A WorkerPlugin to upload a local file to workers.

        Parameters
        ----------
        filepath: str
            A path to the file to upload

        Examples
        --------
        >>> client.register_worker_plugin(UploadFile(".env"))
        """
        def __init__(self, filepath):
            """Initialize the plugin by reading in the data from the given file."""
            self.filename = os.path.basename(filepath)
            self.dirname = os.path.dirname(filepath)
            with open(filepath, "rb") as f:
                self.data = f.read()

        async def setup(self, worker):
            if not os.path.exists(self.dirname):
                os.mkdir(self.dirname)
            os.chdir(self.dirname)
            with open(self.filename, "wb+") as f:
                f.write(self.data)
            return os.listdir()

    cluster = SLURMCluster(cores=1, memory="40GB")
    cluster.scale(jobs=10)

    client = Client(cluster)  # Connect this local process to remote workers
    if upload_file:
        client.register_worker_plugin(UploadFile('/home/isimoesdesousa/.netrc'))

# ---------------------------------
# Read data
# ---------------------------------

# MERRA-2 collection (hourly)
collection_shortname = 'M2T1NXAER'
collection_longname = 'tavg1_2d_aer_Nx'
collection_number = 'MERRA2_400'
MERRA2_version = '5.12.4'
year = 2020

# Open dataset
# Read selected days in the same month and year
month = 1  # January
day_beg = 1
day_end = 31

# Note that collection_number is MERRA2_401 in a few cases, refer to
# "Records of MERRA-2 Data Reprocessing and Service Changes"
if year == 2020 and month == 9:
    collection_number = 'MERRA2_401'

# OPeNDAP URL
url = 'https://goldsmr4.gesdisc.eosdis.nasa.gov/opendap/MERRA2/{}.{}/{}/{:0>2d}'.format(
    collection_shortname, MERRA2_version, year, month)
files_month = ['{}/{}.{}.{}{:0>2d}{:0>2d}.nc4'.format(
    url, collection_number, collection_longname, year, month, days)
    for days in range(day_beg, day_end + 1, 1)]

# Get the number of files
len_files_month = len(files_month)

# Print
print("{} files to be opened:".format(len_files_month))
print("files_month", files_month)

# Read dataset URLs
ds = xr.open_mfdataset(files_month, parallel=parallel)

# View metadata (function like ncdump -c)
ds
```

As this deals with HPCs, I also posted on the Pangeo forum: https://discourse.pangeo.io/t/access-ges-disc-nasa-dataset-using-xarray-and-dask-on-a-cluster/3195/1
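The plugin's file-copying mechanics can be exercised locally, without SLURM or a cluster. This is a minimal sketch: the class mirrors the `WorkerPlugin` above minus the dask base class, the `.netrc` contents are dummies, and `None` stands in for the dask worker object (which this plugin never uses):

```python
import asyncio
import os
import tempfile

class UploadFile:
    """Mirror of the WorkerPlugin above, without the dask dependency."""
    def __init__(self, filepath):
        self.filename = os.path.basename(filepath)
        self.dirname = os.path.dirname(filepath)
        with open(filepath, "rb") as f:
            self.data = f.read()

    async def setup(self, worker):
        if not os.path.exists(self.dirname):
            os.mkdir(self.dirname)
        os.chdir(self.dirname)
        with open(self.filename, "wb+") as f:
            f.write(self.data)
        return os.listdir()

# A throwaway ".netrc" with dummy credentials (never commit real ones).
workdir = tempfile.mkdtemp()
netrc_path = os.path.join(workdir, ".netrc")
with open(netrc_path, "w") as f:
    f.write("machine urs.earthdata.nasa.gov login USER password PASS\n")

plugin = UploadFile(netrc_path)
files = asyncio.run(plugin.setup(worker=None))  # ".netrc" appears in files
```

On a real cluster, `setup` runs once per worker process, which is why the file has to be readable (full path, no `~`) on the machine where the client constructs the plugin.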

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7542/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1521002414 I_kwDOAMm_X85aqKeu 7422 `plot.scatter` only works for declared arguments iuryt 5797727 closed 0     2 2023-01-05T16:15:28Z 2023-01-05T22:39:23Z 2023-01-05T22:39:23Z NONE      

What happened?

```python
ds.plot.scatter("x","y")
```

returns:

```python
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
File /autofs/nas1/home/isimoesdesousa/programs/mambaforge/envs/coringa/lib/python3.9/site-packages/xarray/core/dataset.py:1340, in Dataset._construct_dataarray(self, name)
   1339 try:
-> 1340     variable = self._variables[name]
   1341 except KeyError:

KeyError: None

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
Cell In[60], line 1
----> 1 ds.plot.scatter("x","y")

File /autofs/nas1/home/isimoesdesousa/programs/mambaforge/envs/coringa/lib/python3.9/site-packages/xarray/plot/accessor.py:1071, in DatasetPlotAccessor.scatter(self, *args, **kwargs)
   1069 @functools.wraps(dataset_plot.scatter)
   1070 def scatter(self, *args, **kwargs) -> PathCollection | FacetGrid[DataArray]:
-> 1071     return dataset_plot.scatter(self._ds, *args, **kwargs)

File /autofs/nas1/home/isimoesdesousa/programs/mambaforge/envs/coringa/lib/python3.9/site-packages/xarray/plot/dataset_plot.py:914, in scatter(ds, x, y, z, hue, hue_style, markersize, linewidth, figsize, size, aspect, ax, row, col, col_wrap, xincrease, yincrease, add_legend, add_colorbar, add_labels, add_title, subplot_kws, xscale, yscale, xticks, yticks, xlim, ylim, cmap, vmin, vmax, norm, extend, levels, *args, **kwargs)
    912 del locals_["ds"]
    913 locals_.update(locals_.pop("kwargs", {}))
--> 914 da = _temp_dataarray(ds, y, locals_)
    916 return da.plot.scatter(*locals_.pop("args", ()), **locals_)

File /autofs/nas1/home/isimoesdesousa/programs/mambaforge/envs/coringa/lib/python3.9/site-packages/xarray/plot/dataset_plot.py:740, in _temp_dataarray(ds, y, locals_)
    736     coords[key] = ds[key]
    738 # The dataarray has to include all the dims. Broadcast to that shape
    739 # and add the additional coords:
--> 740 _y = ds[y].broadcast_like(ds)
    742 return DataArray(_y, coords=coords)

File /autofs/nas1/home/isimoesdesousa/programs/mambaforge/envs/coringa/lib/python3.9/site-packages/xarray/core/dataset.py:1431, in Dataset.__getitem__(self, key)
   1429     return self.isel(**key)
   1430 if utils.hashable(key):
-> 1431     return self._construct_dataarray(key)
   1432 if utils.iterable_of_hashable(key):
   1433     return self._copy_listed(key)

File /autofs/nas1/home/isimoesdesousa/programs/mambaforge/envs/coringa/lib/python3.9/site-packages/xarray/core/dataset.py:1342, in Dataset._construct_dataarray(self, name)
   1340     variable = self._variables[name]
   1341 except KeyError:
-> 1342     _, name, variable = _get_virtual_variable(self._variables, name, self.dims)
   1344 needed_dims = set(variable.dims)
   1346 coords: dict[Hashable, Variable] = {}

File /autofs/nas1/home/isimoesdesousa/programs/mambaforge/envs/coringa/lib/python3.9/site-packages/xarray/core/dataset.py:174, in _get_virtual_variable(variables, key, dim_sizes)
    171     return key, key, variable
    173 if not isinstance(key, str):
--> 174     raise KeyError(key)
    176 split_key = key.split(".", 1)
    177 if len(split_key) != 2:

KeyError: None
```

What did you expect to happen?

To plot the figure:

Minimal Complete Verifiable Example

```python
import pandas as pd
import numpy as np

n = 1000
df = pd.DataFrame()
df["x"] = np.random.randn(n)
df["y"] = np.random.randn(n)
ds = df.to_xarray()

# this works
ds.plot.scatter(x="x", y="y")

# this doesn't work
ds.plot.scatter("x", "y")
```
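What seems to be happening can be shown with a minimal sketch. The names here are hypothetical, not xarray's actual code: when `x` and `y` are keyword-only parameters sitting behind `*args`, positional strings never bind to them, and the plotting code ends up looking up `ds[None]`:

```python
def scatter(ds, *args, x=None, y=None, **kwargs):
    # x and y are keyword-only here: scatter(ds, "x", "y") leaves both as
    # None and the strings fall into *args. The lookup ds[y] then becomes
    # ds[None], which is exactly the KeyError: None in the traceback above.
    if y not in ds:
        raise KeyError(y)
    return ds[y]

data = {"x": [1, 2], "y": [3, 4]}
print(scatter(data, x="x", y="y"))  # keywords bind as intended: [3, 4]
try:
    scatter(data, "x", "y")         # positional strings bypass x and y
except KeyError as err:
    print(err.args[0])              # None
```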

MVCE confirmation

  • [x] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

No response

Anything else we need to know?

No response

Environment

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 15:55:03) [GCC 10.4.0]
python-bits: 64
OS: Linux
OS-release: 5.15.0-50-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.8.1
xarray: 2022.12.0
pandas: 1.5.2
numpy: 1.24.0
scipy: 1.9.3
netCDF4: 1.6.2
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.6.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.5
dask: 2022.12.1
distributed: 2022.12.1
matplotlib: 3.6.2
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.11.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 65.6.3
pip: 22.3.1
conda: None
pytest: None
mypy: None
IPython: 8.7.0
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7422/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1340669247 I_kwDOAMm_X85P6P0_ 6922 Support for matplotlib mosaic using variable names iuryt 5797727 open 0     1 2022-08-16T17:31:23Z 2022-08-17T05:35:39Z   NONE      

Is your feature request related to a problem?

This is not related to any problem, but I think it would be nice to have support for passing a matplotlib mosaic whose keys are the variables you want to plot in different panels, and have xarray parse that into the figure.

Describe the solution you'd like

Something like

```python
import matplotlib.pyplot as plt
import xarray as xr
import numpy as np

n = 200
t = np.linspace(0, 32*np.pi, n)
ds = xr.Dataset({letter: (("s", "t"), np.sin(t) + 0.5*np.random.randn(3, n))
                 for letter in "A B C D E".split()})
ds = ds.assign_coords(t=t, s=range(3))

mosaic = [
    ["A", "A", "B", "B", "C", "C"],
    ["X", "D", "D", "E", "E", "X"],
]

kw = dict(x="t", hue="s", add_legend=False)
ds.plot.line(mosaic=mosaic, empty_sentinel="X", **kw)
```

Describe alternatives you've considered

I have a code snippet that generates similar results, but with more code.

```python
import matplotlib.pyplot as plt
import xarray as xr
import numpy as np

n = 200
t = np.linspace(0, 32*np.pi, n)
ds = xr.Dataset({letter: (("s", "t"), np.sin(t) + 0.5*np.random.randn(3, n))
                 for letter in "A B C D E".split()})
ds = ds.assign_coords(t=t, s=range(3))

mosaic = [
    ["A", "A", "B", "B", "C", "C"],
    ["X", "D", "D", "E", "E", "X"],
]

kw = dict(x="t", hue="s", add_legend=False)
fig = plt.figure(constrained_layout=True, figsize=(8, 4))
ax = fig.subplot_mosaic(mosaic, empty_sentinel="X")
for key in ds:
    ds[key].plot.line(ax=ax[key], **kw)
```

Additional context

No response

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6922/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
905452377 MDU6SXNzdWU5MDU0NTIzNzc= 5395 ValueError: unrecognized engine zarr must be one of: ['netcdf4', 'scipy', 'store'] iuryt 5797727 closed 0     6 2021-05-28T13:53:38Z 2021-12-07T19:44:04Z 2021-05-28T15:09:30Z NONE      

Hi,

I am trying to load MUR data from AWS on Google Colab just as Chelle did for her Pangeo tutorial. https://github.com/pangeo-gallery/osm2020tutorial/blob/master/AWS-notebooks/aws_mur_sst_tutorial_long.ipynb

What happened:

```python
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-15-363886c4b27b> in <module>()
      1 import xarray as xr
      2 
----> 3 ds = xr.open_zarr('https://mur-sst.s3.us-west-2.amazonaws.com/zarr-v1',consolidated=True)

2 frames

/usr/local/lib/python3.7/dist-packages/xarray/backends/plugins.py in get_backend(engine)
    133     if engine not in engines:
    134         raise ValueError(
--> 135             f"unrecognized engine {engine} must be one of: {list(engines)}"
    136         )
    137     backend = engines[engine]

ValueError: unrecognized engine zarr must be one of: ['netcdf4', 'scipy', 'store']
```

What you expected to happen:

Load Zarr data from S3.

Minimal Complete Verifiable Example:

```python
import xarray as xr
ds = xr.open_zarr('https://mur-sst.s3.us-west-2.amazonaws.com/zarr-v1', consolidated=True)
```

Anything else we need to know?:

I am using Google Colab.
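The error itself is just a registry lookup, as the `get_backend` frame in the traceback shows. The sketch below is illustrative, not xarray's actual code: `"zarr"` is rejected until a zarr backend has been registered in the engine mapping (which xarray's plugin discovery does when a compatible `zarr` package is importable at the time xarray builds the registry):

```python
# Registry of known engines, as in the traceback's get_backend().
engines = {"netcdf4": "NetCDF4Backend",
           "scipy": "ScipyBackend",
           "store": "StoreBackend"}

def get_backend(engine):
    if engine not in engines:
        raise ValueError(
            f"unrecognized engine {engine} must be one of: {list(engines)}")
    return engines[engine]

try:
    get_backend("zarr")
except ValueError as err:
    print(err)  # unrecognized engine zarr must be one of: [...]

# Once a zarr backend is registered, the same lookup succeeds:
engines["zarr"] = "ZarrBackend"
print(get_backend("zarr"))
```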

Environment:

Output of <tt>xr.show_versions()</tt>

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.10 (default, May 3 2021, 02:48:31) [GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 5.4.109+
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.0
libnetcdf: 4.7.4
xarray: 0.18.2
pandas: 1.1.5
numpy: 1.19.5
scipy: 1.4.1
netCDF4: 1.5.6
pydap: None
h5netcdf: None
h5py: 3.1.0
Nio: None
zarr: 2.8.3
cftime: 1.5.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2.12.0
distributed: 1.25.3
matplotlib: 3.2.2
cartopy: None
seaborn: 0.11.1
numbagg: None
pint: None
setuptools: 56.1.0
pip: 19.3.1
conda: None
pytest: 3.6.4
IPython: 5.5.0
sphinx: 1.8.5
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5395/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
782848816 MDU6SXNzdWU3ODI4NDg4MTY= 4787 Pass `locator` argument to matplotlib when calling `plot.contourf` iuryt 5797727 open 0     3 2021-01-10T16:04:26Z 2021-01-11T17:41:52Z   NONE      

Is your feature request related to a problem? Please describe.

Every time I need a contourf I have to call matplotlib directly, because the `locator` argument is not passed from `xarray.plot.contourf` through to matplotlib.

Describe the solution you'd like

With `ds` being an `xarray.DataArray`, I want the behaviour described here when passing `locator=ticker.LogLocator()` to `ds.plot.contourf`.

Describe alternatives you've considered

I usually have to do `plt.contourf(ds.dim_0, ds.dim_1, ds.values, locator=ticker.LogLocator())`.
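The request boils down to forwarding one extra keyword argument through the plotting wrapper to matplotlib. A minimal sketch with hypothetical stand-in functions (these are not the real xarray or matplotlib signatures):

```python
def mpl_contourf(data, locator=None, levels=None):
    """Stand-in for matplotlib contourf's keyword handling (hypothetical)."""
    return {"locator": locator, "levels": levels}

def xr_contourf(data, **kwargs):
    # Forwarding **kwargs wholesale is what lets `locator` reach matplotlib;
    # a wrapper that enumerates only known keywords would reject or drop it.
    return mpl_contourf(data, **kwargs)

result = xr_contourf([[1, 2], [3, 4]], locator="ticker.LogLocator()")
print(result["locator"])  # prints ticker.LogLocator()
```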

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4787/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · Queries took 28.047ms · About: xarray-datasette