issues

26 rows where milestone = 650893, state = "closed" and type = "pull" sorted by updated_at descending

Facets: type = pull (26) · state = closed (26) · repo = xarray (26)
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
40231730 MDExOlB1bGxSZXF1ZXN0MTk3NzMyODE= 213 Checklist for v0.2.0 shoyer 1217238 closed 0   0.2 650893 0 2014-08-14T08:08:25Z 2014-08-14T17:20:05Z 2014-08-14T17:20:02Z MEMBER   0 pydata/xarray/pulls/213

Should resolve all remaining items in #183.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/213/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
35114453 MDExOlB1bGxSZXF1ZXN0MTY4MDIwMjA= 147 Support "None" as a variable name and use it as a default shoyer 1217238 closed 0   0.2 650893 0 2014-06-06T02:26:57Z 2014-08-14T07:44:27Z 2014-06-09T06:17:55Z MEMBER   0 pydata/xarray/pulls/147

This makes the xray API a little more similar to pandas, which makes heavy use of name = None for objects that can have names but don't always, such as Series and Index.

It will be a particularly useful option to have around when we add a direct constructor for DataArray objects (#115). For now, arrays will probably only end up being named None if they are the result of some mathematical operation where the name could be ambiguous.
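
As an illustration of the pandas convention being mirrored, here is a minimal sketch (not taken from the PR):

    import pandas as pd

    # pandas objects may be unnamed; name defaults to None.
    s = pd.Series([1, 2, 3])
    print(s.name)        # None

    # When an operation cannot infer an unambiguous name, the result is unnamed.
    a = pd.Series([1, 2, 3], name='a')
    b = pd.Series([4, 5, 6], name='b')
    print((a + b).name)  # None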

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/147/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
39162573 MDExOlB1bGxSZXF1ZXN0MTkxMzI3MzY= 194 Consistently use shorter names: always use 'attrs', 'coords' and 'dims' shoyer 1217238 closed 0   0.2 650893 0 2014-07-31T05:11:12Z 2014-08-14T05:08:01Z 2014-08-14T05:07:58Z MEMBER   0 pydata/xarray/pulls/194

Cleaned up a few cases where attributes was used instead of attrs in function signatures.

Fixes: #190

- [x] Switch names in xray itself
- [x] Switch names in tests
- [x] Switch names in documentation

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/194/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
39768388 MDExOlB1bGxSZXF1ZXN0MTk0OTQ1OTc= 207 Raise an error when attempting to use a scalar variable as a dimension shoyer 1217238 closed 0   0.2 650893 0 2014-08-07T21:07:03Z 2014-08-07T21:13:12Z 2014-08-07T21:13:02Z MEMBER   0 pydata/xarray/pulls/207

If 'x' was a scalar variable in a dataset and you set a new variable with 'x' as a dimension, you could end up with a broken Dataset object.
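
A minimal sketch of the scenario (the assignment syntax is illustrative; the error itself is the point of the PR, and the exact message is not claimed here):

    import xray  # the pre-rename name of what is now xarray

    ds = xray.Dataset()
    ds['x'] = 1.0                     # 'x' is a scalar (zero-dimensional) variable
    # With this PR, reusing the scalar variable's name as a dimension raises an
    # error instead of silently producing a broken Dataset.
    ds['y'] = ('x', [1.0, 2.0, 3.0])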

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/207/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
39384616 MDExOlB1bGxSZXF1ZXN0MTkyNjE4MTc= 201 Fix renaming in-place bug with virtual variables shoyer 1217238 closed 0   0.2 650893 0 2014-08-04T01:20:06Z 2014-08-04T01:24:32Z 2014-08-04T01:22:58Z MEMBER   0 pydata/xarray/pulls/201

This is why mutating state is a bad idea.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/201/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
39354137 MDExOlB1bGxSZXF1ZXN0MTkyNDgzMDg= 198 Cleanup of DataArray constructor / Dataset.__getitem__ shoyer 1217238 closed 0   0.2 650893 0 2014-08-02T18:12:36Z 2014-08-02T18:28:54Z 2014-08-02T18:28:52Z MEMBER   0 pydata/xarray/pulls/198

Now Dataset.__getitem__ raises a KeyError when it can't find a variable.
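
A minimal sketch of the new behavior (variable names are invented):

    import xray  # the pre-rename name of what is now xarray

    ds = xray.Dataset()
    ds['temperature'] = ('x', [280.0, 281.5, 279.2])
    ds['temperature']  # returns the DataArray
    ds['missing']      # now raises KeyError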

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/198/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
33463636 MDExOlB1bGxSZXF1ZXN0MTU4NjIwNDQ= 128 Expose more information in DataArray.__repr__ shoyer 1217238 closed 0   0.2 650893 0 2014-05-14T06:05:53Z 2014-08-01T05:54:50Z 2014-05-29T04:19:46Z MEMBER   0 pydata/xarray/pulls/128

This PR changes the DataArray representation so that it displays more of the information associated with a data array:

- "Coordinates" are indicated by their name and the repr of the corresponding pandas.Index object (to indicate how they are used as indices).
- "Linked" dataset variables are also listed.
  - These are other variables in the dataset associated with a DataArray which are also indexed along with the DataArray.
  - They are accessible from the dataset attribute or by indexing the data array with a string.
  - Perhaps their most convenient aspect is that they enable groupby operations by name for DataArray objects.
  - This is an admittedly somewhat confusing (though convenient) notion that I am considering removing, but if we don't remove them we should certainly expose their existence more clearly, given the potential benefits in expressiveness and costs in performance.

Questions to resolve:

- Is "Linked dataset variables" the best name for these?
- Perhaps it would be useful to show more information about these linked variables, such as their dimensions and/or shape?

Examples of the new repr are on nbviewer: http://nbviewer.ipython.org/gist/shoyer/94936e5b71613683d95a

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/128/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
39167256 MDExOlB1bGxSZXF1ZXN0MTkxMzUxNTk= 196 Raise NotImplementedError when attempting to use a pandas.MultiIndex shoyer 1217238 closed 0   0.2 650893 0 2014-07-31T06:53:04Z 2014-07-31T07:00:43Z 2014-07-31T07:00:40Z MEMBER   0 pydata/xarray/pulls/196

Related: #164

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/196/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
39163624 MDExOlB1bGxSZXF1ZXN0MTkxMzMzNjg= 195 .loc and .sel support indexing with boolean arrays shoyer 1217238 closed 0   0.2 650893 0 2014-07-31T05:41:09Z 2014-07-31T06:52:43Z 2014-07-31T06:52:41Z MEMBER   0 pydata/xarray/pulls/195

Fixes #182

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/195/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
38857041 MDExOlB1bGxSZXF1ZXN0MTg5NDczNTA= 192 Enhanced support for modifying Dataset & DataArray properties in place shoyer 1217238 closed 0   0.2 650893 0 2014-07-28T02:14:00Z 2014-07-31T04:46:19Z 2014-07-31T04:46:16Z MEMBER   0 pydata/xarray/pulls/192

With this patch, it is possible to perform the following operations:

- data_array.name = 'foo'
- data_array.coordinates = ...
- data_array.coordinates[0] = ...
- data_array.coordinates['x'] = ...
- dataset.coordinates['x'] = ...
- dataset.rename(..., inplace=True)

It is no longer possible to set data_array.variable = ..., which was technically part of the public API but, I would guess, unused.
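
A minimal sketch of this style of in-place modification, written against the modern xarray API for concreteness (the PR itself used the older coordinates spelling, and the inplace= keyword has since been removed):

    import xarray as xr

    data_array = xr.DataArray([1.0, 2.0, 3.0], coords={'x': [10, 20, 30]}, dims='x')
    data_array.name = 'foo'              # assign a name in place
    data_array.coords['x'] = [0, 1, 2]   # replace a coordinate in place

    dataset = data_array.to_dataset()
    dataset = dataset.rename({'foo': 'bar'})  # modern rename returns a new object;
                                              # this PR added rename(..., inplace=True)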

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/192/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
38700243 MDExOlB1bGxSZXF1ZXN0MTg4Nzk3OTY= 189 Implementation of Dataset.apply method shoyer 1217238 closed 0   0.2 650893 0 2014-07-25T06:18:29Z 2014-07-31T04:45:29Z 2014-07-31T04:45:29Z MEMBER   0 pydata/xarray/pulls/189

Fixes #140

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/189/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
38502970 MDExOlB1bGxSZXF1ZXN0MTg3NTk1NTA= 188 Dataset context manager and close() method shoyer 1217238 closed 0   0.2 650893 1 2014-07-23T07:03:49Z 2014-07-29T19:47:46Z 2014-07-29T19:44:30Z MEMBER   0 pydata/xarray/pulls/188

With this PR, it is possible to close the data store from which a dataset was loaded via ds.close() or automatically when a dataset is used with a context manager:

    with xray.open_dataset('data.nc') as ds:
        ...

The ability to cleanly close files opened from disk is pretty essential -- we probably should have had this a while ago. It should not be necessary to use the low-level/unstable datastore API to get this functionality.

Implementation question: With this current implementation, calling ds.close() on (and using a context manager with) a dataset not linked to any file objects is a no-op. Should we raise an exception instead? Something like IOError('no file object to close')?
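
For reference, a sketch of the equivalent explicit form without a context manager (the file name is illustrative):

    ds = xray.open_dataset('data.nc')
    try:
        ...  # work with ds
    finally:
        ds.close()  # release the underlying data store / file handle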

CC @ToddSmall

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/188/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
33852501 MDExOlB1bGxSZXF1ZXN0MTYwODU4Mzg= 137 Dataset.reduce methods jhamman 2443309 closed 0   0.2 650893 6 2014-05-20T01:53:30Z 2014-07-25T06:37:31Z 2014-05-21T20:23:36Z MEMBER   0 pydata/xarray/pulls/137

A first attempt at implementing Dataset reduction methods.

#131

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/137/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
36908559 MDExOlB1bGxSZXF1ZXN0MTc4NDAyNDE= 177 Add python2.6 compatibility aykuznetsova 3344007 closed 0   0.2 650893 1 2014-07-01T16:19:21Z 2014-07-01T21:30:08Z 2014-07-01T19:57:30Z NONE   0 pydata/xarray/pulls/177

This change mainly involves an alternative import of OrderedDict, modified dict and set comprehensions, and using unittest2 for testing.
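
A sketch of the kinds of shims this implies (assumed patterns, not copied from the patch):

    try:
        from collections import OrderedDict   # Python 2.7+ / 3.x
    except ImportError:
        from ordereddict import OrderedDict   # backport package for Python 2.6

    # Python 2.6 lacks dict/set comprehension syntax, so
    #   {k: v for k, v in pairs}  becomes  dict((k, v) for k, v in pairs)
    #   {x for x in items}        becomes  set(x for x in items)
    pairs = [('a', 1), ('b', 2)]
    mapping = dict((k, v) for k, v in pairs)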

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/177/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
34810536 MDExOlB1bGxSZXF1ZXN0MTY2MjIxMDA= 144 Use "equivalence" for all dictionary equality checks shoyer 1217238 closed 0   0.2 650893 0 2014-06-02T21:01:35Z 2014-06-25T23:40:36Z 2014-06-02T21:20:15Z MEMBER   0 pydata/xarray/pulls/144

This should fix a bug @mgarvert encountered with concatenating variables with different array attributes.

In the process of fixing this issue, I encountered and fixed another bug with utils.remove_incompatible_items.
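
The underlying problem, sketched (this shows the idea behind an equivalence check, not the actual xray implementation):

    import numpy as np

    attrs1 = {'valid_range': np.array([0, 100])}
    attrs2 = {'valid_range': np.array([0, 100])}
    # attrs1 == attrs2 raises ValueError: the element-wise array comparison cannot
    # be reduced to a single bool, so plain dict equality is unusable for attrs.

    def equivalent(a, b):
        # Treat arrays as equal when all elements match; fall back to == otherwise.
        if isinstance(a, np.ndarray) or isinstance(b, np.ndarray):
            return np.array_equal(a, b)
        return a == b

    all(equivalent(attrs1[k], attrs2[k]) for k in attrs1)  # True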

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/144/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
36453574 MDExOlB1bGxSZXF1ZXN0MTc1NzQ3MjY= 174 Add isnull and notnull (wrapping pandas) shoyer 1217238 closed 0   0.2 650893 0 2014-06-25T07:07:42Z 2014-06-25T07:37:36Z 2014-06-25T07:37:35Z MEMBER   0 pydata/xarray/pulls/174
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/174/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
36354990 MDExOlB1bGxSZXF1ZXN0MTc1MTM3NTk= 173 Edge cases shoyer 1217238 closed 0   0.2 650893 0 2014-06-24T05:34:05Z 2014-06-24T17:55:16Z 2014-06-24T17:55:14Z MEMBER   0 pydata/xarray/pulls/173
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/173/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
36354140 MDExOlB1bGxSZXF1ZXN0MTc1MTMyNjY= 172 {DataArray,Dataset}.indexes no longer creates a new dict shoyer 1217238 closed 0   0.2 650893 0 2014-06-24T05:10:25Z 2014-06-24T05:34:38Z 2014-06-24T05:34:36Z MEMBER   0 pydata/xarray/pulls/172

According to the toy benchmark below, this shaves off between 20% (diff-indexes) and 40% (same-indexes) of xray's overhead for array math:

    import numpy as np
    import xray

    x = np.random.randn(1000, 1000)
    y = np.random.randn(1000, 1000)
    dx = xray.DataArray(x)
    dy = xray.DataArray(y)

    %timeit x + x    # raw-numpy
    %timeit dx + dx  # same-indexes
    %timeit dx + dy  # diff-indexes

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/172/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
36240022 MDExOlB1bGxSZXF1ZXN0MTc0NDY2Njc= 171 Implementation of DatasetGroupBy summary methods shoyer 1217238 closed 0   0.2 650893 0 2014-06-22T08:38:51Z 2014-06-23T07:25:10Z 2014-06-23T07:25:08Z MEMBER   0 pydata/xarray/pulls/171

You can now do ds.groupby('time.month').mean() to apply the mean over all groups and variables in a dataset.

It is not optimized like the DataArray.groupby summary methods but it should work.
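
A minimal sketch of the usage described (data and variable names are invented, using the modern package name):

    import numpy as np
    import pandas as pd
    import xarray as xr  # modern name of the xray package

    times = pd.date_range('2014-01-01', periods=120)
    ds = xr.Dataset({'t2m': ('time', np.random.randn(120))}, coords={'time': times})
    monthly = ds.groupby('time.month').mean()  # reduces every variable over each month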

Thanks @jhamman for laying the groundwork for this!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/171/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
36238726 MDExOlB1bGxSZXF1ZXN0MTc0NDYwNjY= 169 Cleanups shoyer 1217238 closed 0   0.2 650893 0 2014-06-22T06:44:17Z 2014-06-22T06:56:22Z 2014-06-22T06:56:20Z MEMBER   0 pydata/xarray/pulls/169
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/169/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
35684756 MDExOlB1bGxSZXF1ZXN0MTcxMTc1NjY= 161 Rename "Coordinate", "labeled" and "indexed" shoyer 1217238 closed 0   0.2 650893 1 2014-06-13T16:07:40Z 2014-06-22T00:44:28Z 2014-06-22T00:44:26Z MEMBER   0 pydata/xarray/pulls/161

Fixes #142 Fixes #148

All existing code should still work but issue a FutureWarning if any of the old names are used.

Full list of updates:

| Old            | New         |
| -------------- | ----------- |
| Coordinate     | Index       |
| coordinates    | indexes     |
| noncoordinates | nonindexes  |
| indexed        | isel        |
| labeled        | sel         |
| select         | select_vars |
| unselect       | drop_vars   |

Most of these are both Dataset and DataArray methods/properties.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/161/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
35965115 MDExOlB1bGxSZXF1ZXN0MTcyODEzODQ= 165 WIP: cleanup conventions.encode_cf_variable shoyer 1217238 closed 0   0.2 650893 0 2014-06-18T08:47:35Z 2014-06-22T00:36:01Z 2014-06-22T00:35:42Z MEMBER   0 pydata/xarray/pulls/165

Almost ready, except for failing tests on Python 3.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/165/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
36017422 MDExOlB1bGxSZXF1ZXN0MTczMTI5MDQ= 166 Revert using __slots__ for Mapping subclasses in xray.utils shoyer 1217238 closed 0   0.2 650893 1 2014-06-18T19:08:47Z 2014-06-18T19:24:50Z 2014-06-18T19:12:52Z MEMBER   0 pydata/xarray/pulls/166

This recently added change introduced some complexity for a very nominal speed benefit, and it appears to break joblib serialization somehow (even though pickle works). So for now, revert it -- and consider filing a joblib bug if we can narrow it down.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/166/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
35762823 MDExOlB1bGxSZXF1ZXN0MTcxNTgzOTg= 163 BUG: fix encoding issues (array indexing now resets encoding) shoyer 1217238 closed 0   0.2 650893 4 2014-06-16T01:29:22Z 2014-06-17T07:28:45Z 2014-06-16T04:52:43Z MEMBER   0 pydata/xarray/pulls/163

Fixes #156, #157

To elaborate on the changes:

1. When an array is indexed, its encoding will be reset. This takes care of the invalid chunksize issue. More generally, this seems like the right choice because it's not clear that the right encoding will be the same after slicing an array, anyway.
2. If an array has encoding['dtype'] = np.dtype('S1') (e.g., it was originally encoded in characters), it will be stacked up to be saved as a character array, even if it's being saved to a NetCDF4 file. Previously, the array would be cast to 'S1' without stacking, which would result in silent loss of data.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/163/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
35263258 MDExOlB1bGxSZXF1ZXN0MTY4NzMwNTA= 149 Data array constructor shoyer 1217238 closed 0   0.2 650893 1 2014-06-09T06:29:49Z 2014-06-12T20:38:27Z 2014-06-11T16:53:58Z MEMBER   0 pydata/xarray/pulls/149

Fixes #115.

Related: #116, #117.

Note: a remaining major task will be to rewrite/reorganize the docs to introduce DataArray first, entirely independently of Dataset. This will make it easier for new users to figure out how to get started with xray, since DataArray is much simpler.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/149/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
35304758 MDExOlB1bGxSZXF1ZXN0MTY4OTY2MjM= 150 Fix DecodedCFDatetimeArray was being incorrectly indexed. akleeman 514053 closed 0   0.2 650893 0 2014-06-09T17:25:05Z 2014-06-09T17:43:50Z 2014-06-09T17:43:50Z CONTRIBUTOR   0 pydata/xarray/pulls/150

This was causing an error in the following situation:

    ds = xray.Dataset()
    ds['time'] = ('time', [np.datetime64('2001-05-01') for i in range(5)])
    ds['variable'] = ('time', np.arange(5.))
    ds.to_netcdf('test.nc')
    ds = xray.open_dataset('./test.nc')
    ss = ds.indexed(time=slice(0, 2))
    ss.dumps()

Thanks @shoyer for the fix.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/150/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);