
issues


21 rows where repo = 13221727, state = "closed" and user = 306380 sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
295270362 MDU6SXNzdWUyOTUyNzAzNjI= 1895 Avoid Adapters in task graphs? mrocklin 306380 closed 0     13 2018-02-07T19:52:02Z 2022-05-11T20:26:42Z 2022-05-11T20:26:42Z MEMBER      

Looking at an open_zarr computation from @rabernat, I'm coming across intermediate values like the following:

```python
>>> Future('zarr-adt-0f90b3f56f247f966e5ef01277f31374').result()
ImplicitToExplicitIndexingAdapter(array=LazilyIndexedArray(array=<xarray.backends.zarr.ZarrArrayWrapper object at 0x7fa921fec278>, key=BasicIndexer((slice(None, None, None), slice(None, None, None), slice(None, None, None)))))
```

This object has many dependents, and so will presumably have to float around the network to all of the workers:

```python
>>> len(dep.dependents)
1781
```

In principle this is fine, especially if this object is cheap to serialize, move, and deserialize. It does introduce a bit of friction though. I'm curious how hard it would be to build task graphs that generated these objects on the fly, or else removed them altogether. It is slightly more convenient from a task scheduling perspective for data access tasks to not have any dependencies.
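To make the alternative concrete, here is a toy dict-style task graph contrasting a shared adapter node with adapters built on the fly (all names below are hypothetical stand-ins, not xarray or zarr API):

```python
def make_adapter():
    return "adapter"   # plays the role of ImplicitToExplicitIndexingAdapter

def get_chunk(adapter, i):
    return i           # plays the role of reading one zarr chunk

# Today: one adapter key with many dependents that must reach every worker.
graph_shared = {
    "zarr-adt": (make_adapter,),
    ("x", 0): (get_chunk, "zarr-adt", 0),
    ("x", 1): (get_chunk, "zarr-adt", 1),
}

# Alternative: rebuild the adapter inside each task; the data-access
# tasks then have no dependencies at all.
def load(i):
    return get_chunk(make_adapter(), i)

graph_inline = {
    ("x", 0): (load, 0),
    ("x", 1): (load, 1),
}
```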

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1895/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
336371511 MDExOlB1bGxSZXF1ZXN0MTk3ODQwODgw 2255 Add automatic chunking to open_rasterio mrocklin 306380 closed 0     10 2018-06-27T20:15:07Z 2022-04-07T20:21:24Z 2022-04-07T20:21:24Z MEMBER   0 pydata/xarray/pulls/2255

This uses the automatic chunking in dask 0.18+ to chunk rasterio datasets in a nicely aligned way.

Currently this doesn't implement tests due to a difficulty in creating chunked tiff images.

This also uncovered some inefficiencies in Dask, which doesn't align rechunking to existing chunk schemes.
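For reference, the dask 0.18 auto-chunking this builds on can be exercised directly (a minimal standalone illustration, separate from this PR's code):

```python
import dask.array as da

# dask picks chunk shapes itself, aiming at its configured chunk-size
# target and keeping chunks uniform along each axis.
x = da.ones((20000, 20000), chunks="auto")
print(x.chunks)  # concrete sizes depend on dask's chunk-size setting
```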

  • [x] Closes #2093
  • [ ] Tests added (for all bug fixes or enhancements)
  • [ ] Tests passed (for all non-documentation changes)
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

I could use help with the following:

  • How to create tiled TIFF files in the tests
  • The right way to merge different dtypes and block shapes in the TIFF file. Currently I'm assuming that they're uniform.
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2255/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
318950038 MDU6SXNzdWUzMTg5NTAwMzg= 2093 Default chunking in GeoTIFF images mrocklin 306380 closed 0     10 2018-04-30T16:21:30Z 2020-06-18T06:27:07Z 2020-06-18T06:27:07Z MEMBER      

Given a tiled GeoTIFF image I'm looking for the best practice in reading it as a chunked dataset. I did this in this notebook by first opening the file with rasterio, looking at the block sizes, and then using those to inform the argument to chunks= in xarray.open_rasterio. This works, but is somewhat cumbersome because I also had to dive into the rasterio API. Do we want to provide defaults here?
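The cumbersome version of that workflow looks roughly like this (the filename is a placeholder):

```python
import rasterio
import xarray as xr

# Step 1: dive into rasterio to discover the file's internal tiling.
with rasterio.open("image.tif") as src:
    block_h, block_w = src.block_shapes[0]  # (rows, cols) of band 1's tiles

# Step 2: feed that tiling back in as the dask chunk specification.
arr = xr.open_rasterio("image.tif", chunks={"band": 1, "y": block_h, "x": block_w})
```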

In dask.array, every time this has come up we've shot it down: automatic chunking is error-prone and hard to do well. However, in those cases the object we're being given usually also conveys its chunking in a way that matches how dask.array thinks about it, so the extra cognitive load on the user has been somewhat low. Rasterio's model and API feel much more foreign to me than a project like NetCDF or h5py does. I find myself wanting a chunks=True or chunks='100MB' option.

Thoughts on this? Is this in-scope? If so then what is the right API and what is the right policy for how to make xarray/dask.array chunks larger than GeoTIFF chunks?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2093/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
400948664 MDU6SXNzdWU0MDA5NDg2NjQ= 2692 Xarray tutorial at SciPy 2019? mrocklin 306380 closed 0     10 2019-01-19T01:56:38Z 2020-03-25T04:34:27Z 2019-02-17T05:07:45Z MEMBER      

Is anyone interested in submitting a tutorial to SciPy 2019? I think that it would be useful to have an official Xarray tutorial out there somewhere on the internet. This could be good motivation to create one.

https://www.scipy2019.scipy.org/tutorials

See also: https://github.com/pydata/xarray/issues/1882

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2692/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
287969295 MDU6SXNzdWUyODc5NjkyOTU= 1822 Use apply_ufunc in xESMF regridding package mrocklin 306380 closed 0     4 2018-01-12T00:17:04Z 2020-01-15T00:01:49Z 2020-01-15T00:01:49Z MEMBER      

I would like to call attention to https://github.com/JiaweiZhuang/xESMF/issues/3#issuecomment-354668897 . It seems like the xESMF package does regridding in a way that at least some XArray users find sensible. It should probably make use of apply_ufunc, but currently does not, and it is not particularly parallelizable (or at least that is my understanding). It could be that some modest development by someone more familiar with XArray could have a large impact by properly using apply_ufunc within that codebase.
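To make the suggestion concrete, here is a toy sketch of the apply_ufunc pattern, with a no-op kernel standing in for the actual ESMF regridding (all data and names are hypothetical):

```python
import numpy as np
import xarray as xr

ds_in = xr.Dataset(
    {"field": (("time", "lat", "lon"), np.random.rand(4, 90, 180))}
).chunk({"time": 1})

def regrid_block(block):
    # placeholder for regridding one (lat, lon) slab via ESMF weights
    return block * 1.0

out = xr.apply_ufunc(
    regrid_block,
    ds_in["field"],
    input_core_dims=[["lat", "lon"]],
    output_core_dims=[["lat", "lon"]],
    dask="parallelized",        # run once per chunk, in parallel
    output_dtypes=[np.float64],
)
```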

I apologize for posting an issue about another package in this issue tracker. Feel free to close.

cc @JiaweiZhuang

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1822/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
221858543 MDU6SXNzdWUyMjE4NTg1NDM= 1375 Sparse arrays mrocklin 306380 closed 0     25 2017-04-14T18:00:14Z 2019-08-30T02:36:12Z 2019-08-13T03:31:14Z MEMBER      

I would like to have an XArray that has scipy.sparse arrays rather than numpy arrays. Is this in scope?

What would need to happen within XArray to support this?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1375/reactions",
    "total_count": 8,
    "+1": 8,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
323785231 MDU6SXNzdWUzMjM3ODUyMzE= 2143 Upstream changes in Dask mrocklin 306380 closed 0     1 2018-05-16T21:01:21Z 2019-08-15T15:16:54Z 2019-08-15T15:16:54Z MEMBER      

Hi All,

There are a couple of changes coming in dask that might affect XArray code:

  1. We're replacing the get=dask.threaded.get keyword with scheduler='threads'
  2. We're replacing dask.set_options(...) with dask.config.set(...)

Both of the old systems will still work, at least for a version or two, but we plan to remove them in the future. I thought I'd bring these changes up here so that we can plan a clean deprecation within XArray. Neither change has been released yet, so both features are still up for discussion if this community has additional constraints.
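In code, the migration looks roughly like this:

```python
import dask

# Old style (to be deprecated):
#   dask.set_options(get=dask.threaded.get)
#   x.compute(get=dask.threaded.get)

# New style:
dask.config.set(scheduler="threads")
# x.compute(scheduler="threads")
```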

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2143/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
355308699 MDU6SXNzdWUzNTUzMDg2OTk= 2390 Why are there two compute calls for plot? mrocklin 306380 closed 0     3 2018-08-29T19:53:45Z 2019-08-04T23:00:59Z 2019-08-04T23:00:59Z MEMBER      

Anecdotally I find that when I call .plot() on a dataset object that holds dask arrays, compute gets called twice. Why is this? I'm curious whether this is something that should be resolved.
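One way to observe this (a sketch that swaps in a counting scheduler via dask's config, and assumes a working matplotlib backend):

```python
import dask
import dask.array as da
import xarray as xr

arr = xr.DataArray(da.ones((100, 100), chunks=50))

calls = []
def counting_get(dsk, keys, **kwargs):
    calls.append(keys)                    # record each scheduler invocation
    return dask.get(dsk, keys, **kwargs)  # defer to the synchronous scheduler

with dask.config.set(scheduler=counting_get):
    arr.plot()

print(len(calls))  # anecdotally 2 rather than 1
```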

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2390/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
390443869 MDExOlB1bGxSZXF1ZXN0MjM4MjA5ODI1 2603 Support HighLevelGraphs mrocklin 306380 closed 0     2 2018-12-12T22:52:28Z 2018-12-13T17:13:10Z 2018-12-13T17:13:00Z MEMBER   0 pydata/xarray/pulls/2603

Fixes https://github.com/dask/dask/issues/4291

  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
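For context, a HighLevelGraph can be built and executed like an ordinary graph mapping (an illustrative toy, not this PR's code):

```python
import dask
from dask.highlevelgraph import HighLevelGraph

layer = {("x", 0): (sum, [1, 2, 3])}
graph = HighLevelGraph.from_collections("x", layer, dependencies=[])

# HighLevelGraph acts like a Mapping, so ordinary schedulers accept it.
print(dask.get(graph, [("x", 0)]))  # [6]
```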
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2603/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
372640063 MDExOlB1bGxSZXF1ZXN0MjI0NzY5Mjg1 2500 Avoid use of deprecated get= parameter in tests mrocklin 306380 closed 0     7 2018-10-22T18:25:58Z 2018-10-23T10:31:37Z 2018-10-23T00:22:51Z MEMBER   0 pydata/xarray/pulls/2500
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2500/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
282178751 MDU6SXNzdWUyODIxNzg3NTE= 1784 Add compute=False keywords to `to_foo` functions mrocklin 306380 closed 0     9 2017-12-14T17:25:19Z 2018-05-16T15:05:03Z 2018-05-16T15:05:03Z MEMBER      

When working with @jhamman profiling the to_zarr method on large datasets, I wanted the ability to run through the to_zarr setup code but avoid waiting on the dask computation to finish. Many of the to_foo methods in Dask proper have a compute=False keyword that returns a dask.delayed object on which people can call compute later if desired.
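A sketch of the requested API, which did not exist when this was filed (path and shapes are placeholders):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"x": ("t", np.arange(10))}).chunk({"t": 5})

# Run the to_zarr setup eagerly, but defer the actual write.
delayed = ds.to_zarr("out.zarr", compute=False)  # returns a dask.delayed object
# ... profile, inspect the graph, or hand it to a scheduler ...
delayed.compute()
```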

cc @jhamman @rabernat @jakirkham (who has been looking at similar questions within dask.array.Array.store)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1784/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
295146502 MDU6SXNzdWUyOTUxNDY1MDI= 1894 Zarr keys include variable name mrocklin 306380 closed 0     1 2018-02-07T13:56:32Z 2018-02-17T04:40:15Z 2018-02-17T04:40:15Z MEMBER      

When using open_zarr on a dataset with many variables, the keynames include the variable name, like

```python
('zarr-temperature-1234', 1, 3, 2)
```

In the distributed scheduler these keynames get shortened to prefixes like zarr-temperature, which are used both for scheduling heuristics (all keys with the same prefix are expected to take similar-ish amounts of time) and for diagnostics such as the progress bar.

We may want to avoid including the variable name in the keyname here, so that the variables aren't broken out into several groups. Instead you might consider putting the variable name within the key as another member of the tuple, like the following:

```python
('zarr-1234', 'temperature', 1, 3, 2)
```
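The grouping behavior at stake can be sketched like this (simplified; the real prefix logic lives in dask.distributed):

```python
def key_prefix(key):
    # take the string part of the key and drop the trailing token
    name = key[0] if isinstance(key, tuple) else key
    return name.rsplit("-", 1)[0]

print(key_prefix(("zarr-temperature-1234", 1, 3, 2)))     # 'zarr-temperature'
print(key_prefix(("zarr-1234", "temperature", 1, 3, 2)))  # 'zarr'
```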
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1894/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
296434660 MDExOlB1bGxSZXF1ZXN0MTY4NjIyOTM3 1904 Replace task_state with tasks in dask test mrocklin 306380 closed 0     4 2018-02-12T16:19:14Z 2018-02-12T21:08:06Z 2018-02-12T21:08:06Z MEMBER   0 pydata/xarray/pulls/1904

This internal state was changed in the latest release.

  • [x] Closes #1903 (remove if there is no corresponding issue, which should only be the case for minor changes)
  • [ ] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1904/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
276448264 MDExOlB1bGxSZXF1ZXN0MTU0NDI5ODYz 1741 Auto flake mrocklin 306380 closed 0     2 2017-11-23T18:00:47Z 2018-01-14T20:49:20Z 2018-01-14T20:49:20Z MEMBER   0 pydata/xarray/pulls/1741
  • [ ] Closes #xxxx
  • [x] Tests added / passed
  • [ ] Passes flake8 xarray
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API

I had a free half hour, so I decided to run the autoflake and autopep8 tools on the codebase. flake8 xarray passes. I copied over the exclusions that we use within dask/distributed and extended the line length to 120. You may wish to review these decisions.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1741/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
286448591 MDU6SXNzdWUyODY0NDg1OTE= 1810 data_array.<tab> reads data mrocklin 306380 closed 0     4 2018-01-06T01:34:55Z 2018-01-06T14:26:36Z 2018-01-06T14:26:36Z MEMBER      

Code Sample, a copy-pastable example if possible

```python
ds = xarray.open_dataset(...)
da = ds.variables['...']
da.<tab>
```

Problem description

This starts reading data. I don't know why. I'm using XArray against a FUSE system that is both expensive (it's targeting Google Cloud Storage) and also has logging. I can see that auto-completion immediately starts a lot of file reading on the file system.

Output of xr.show_versions()

```python
>>> xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.10.0-42-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
xarray: 0.10.0
pandas: 0.21.0
numpy: 1.13.3
scipy: 1.0.0
netCDF4: 1.3.1
h5netcdf: 0.5.0
Nio: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.16.0
matplotlib: None
cartopy: None
seaborn: None
setuptools: 36.6.0
pip: 9.0.1
conda: 4.3.29
pytest: 3.3.1
IPython: 6.2.1
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1810/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
279198672 MDExOlB1bGxSZXF1ZXN0MTU2MzQwNDk3 1760 Fix DataArray.__dask_scheduler__ to point to dask.threaded.get mrocklin 306380 closed 0     8 2017-12-05T00:12:21Z 2017-12-07T22:13:42Z 2017-12-07T22:09:18Z MEMBER   0 pydata/xarray/pulls/1760

Previously this erroneously pointed to an optimize function, likely a copy-paste error.

For testing, this also redirects the .compute methods to use the dask.compute function directly if dask.__version__ >= '0.16.0'.

Closes #1759

  • [x] Closes #xxxx (remove if there is no corresponding issue, which should only be the case for minor changes)
  • [x] Tests added (for all bug fixes or enhancements)
  • [x] Tests passed (for all non-documentation changes)
  • [x] Passes git diff upstream/master **/*py | flake8 --diff (remove if you did not edit any Python files)
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1760/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
269886091 MDExOlB1bGxSZXF1ZXN0MTQ5NzMxMDM5 1674 Support Dask interface mrocklin 306380 closed 0     12 2017-10-31T09:15:52Z 2017-11-07T18:37:06Z 2017-11-07T18:31:45Z MEMBER   0 pydata/xarray/pulls/1674

This integrates the new dask interface methods into XArray. This will place XArray as a first-class dask collection and help in particular with newer dask.distributed features.
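Roughly, the interface consists of a handful of dunder methods; a toy stand-in (not xarray's actual implementation) looks like:

```python
import dask
import dask.threaded

class MyCollection:
    """Toy dask collection: anything with these methods works with dask.compute."""

    def __init__(self, dsk, key):
        self._dsk, self._key = dsk, key

    def __dask_graph__(self):
        return self._dsk                 # mapping of key -> task

    def __dask_keys__(self):
        return [self._key]               # which outputs to compute

    @staticmethod
    def __dask_optimize__(dsk, keys, **kwargs):
        return dsk                       # no graph optimization in this toy

    __dask_scheduler__ = staticmethod(dask.threaded.get)

    def __dask_postcompute__(self):
        # finalizer applied to the computed results, plus extra args
        return (lambda results: results[0]), ()

print(dask.compute(MyCollection({"x": (sum, [1, 2, 3])}, "x")))  # (6,)
```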

  • [x] Closes https://github.com/pangeo-data/pangeo/issues/5
  • [x] Tests added / passed
  • [x] Passes git diff upstream/master **/*py | flake8 --diff
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Builds on work from @jcrist here: https://github.com/dask/dask/pull/2748 Depends on https://github.com/dask/dask/pull/2847

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1674/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
218314868 MDU6SXNzdWUyMTgzMTQ4Njg= 1343 Some XArray key names don't group nicely mrocklin 306380 closed 0     2 2017-03-30T20:15:44Z 2017-05-22T20:38:56Z 2017-05-22T20:38:56Z MEMBER      

Some XArray loading functions provide keys that don't adhere to dask conventions used for naming.

We can solve this in XArray by using names like 'load-' + dask.base.tokenize(stuff), or in dask by trying to identify and avoid names like these. It might be wise to attempt both. I expect that this will be easier to solve in XArray (though that's also in my own self-interest :))
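Concretely, the suggested naming convention would look something like this (the inputs are placeholders):

```python
import dask.base

# A human-readable prefix plus a deterministic hash of whatever
# determines the load, so equivalent loads share a name.
stuff = ("myfile.nc", {"engine": "netcdf4"})
name = "load-" + dask.base.tokenize(stuff)
print(name)  # e.g. 'load-8f1d...' -- groups cleanly under the 'load' prefix
```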

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1343/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
218942553 MDExOlB1bGxSZXF1ZXN0MTEzOTM2OTk3 1349 Add persist method to DataSet mrocklin 306380 closed 0     10 2017-04-03T13:59:02Z 2017-04-04T16:19:20Z 2017-04-04T16:14:17Z MEMBER   0 pydata/xarray/pulls/1349

Fixes https://github.com/pydata/xarray/issues/1344

  • [x] closes #xxxx
  • [x] tests added / passed
  • [x] passes git diff upstream/master | flake8 --diff
  • [x] whatsnew entry (not sure what to do here, is there a new section? It looks like the last release was yesterday)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1349/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
218315793 MDU6SXNzdWUyMTgzMTU3OTM= 1344 Dask Persist mrocklin 306380 closed 0     5 2017-03-30T20:19:17Z 2017-04-04T16:14:17Z 2017-04-04T16:14:17Z MEMBER      

It would be convenient to load constituent dask.arrays into memory as dask.arrays rather than as numpy arrays. This would help with distributed computations where we want to load a large amount of data into distributed memory once and then iterate on the full xarray dataset repeatedly without reloading from disk every time.

We can probably solve this from either side:

  1. XArray could make a .persist method that replaced all of its dask.arrays with a persisted version of that array

```python
import dask

dset.x, dset.y, dset.z = dask.persist(dset.x, dset.y, dset.z)
```

  2. We could look into the Dask duck type solution again: https://github.com/dask/dask/pull/1068

cc @shoyer @jcrist @rabernat @pwolfram

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1344/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
187594293 MDU6SXNzdWUxODc1OTQyOTM= 1085 Always use absolute paths mrocklin 306380 closed 0     3 2016-11-06T22:25:08Z 2016-12-01T16:47:40Z 2016-12-01T16:47:40Z MEMBER      

This would avoid a mismatch between clients and workers when using dask.distributed:

```python
In [2]: os.path.abspath('my-local-path')
Out[2]: '/home/mrocklin/my-local-path'
```
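A minimal sketch of the proposed fix (the function name is hypothetical):

```python
import os

def resolve_for_workers(path):
    # Resolve on the client so that workers on other machines receive a
    # path that means the same thing on a shared filesystem.
    return os.path.abspath(os.path.expanduser(path))

print(resolve_for_workers('my-local-path'))  # e.g. '/home/mrocklin/my-local-path'
```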

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1085/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue

```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
```