issues

9 rows where state = "closed", type = "issue" and user = 12237157 sorted by updated_at descending

#4768 weighted for xr.corr · aaronspring (12237157) · closed · 2 comments · created 2021-01-05T18:24:29Z · updated 2023-12-12T00:24:22Z · closed 2023-12-12T00:24:22Z · CONTRIBUTOR · id 779392905

Is your feature request related to a problem? Please describe.

I want to compute a weighted correlation, e.g. a spatial correlation with latitude weights:

```python
xr.corr(fct, obs, dim=['lon', 'lat'], weights=np.cos(np.abs(fct.lat)))
```

So far, `xr.corr` accepts neither a `weights` argument nor `input.weighted(weights)`. A more straightforward case would be weighting different members:

```python
xr.corr(fct, obs, dim='member', weights=np.arange(fct.member.size))
```

Describe the solution you'd like

We started xskillscore (https://github.com/xarray-contrib/xskillscore) some time ago, before `xr.corr` was implemented, and it already has `weighted`, `skipna` and `keep_attrs` keywords. We also have `xs.rmse`, `xs.mse`, ... implemented via `xr.apply_ufunc` (https://github.com/aaronspring/xskillscore/blob/150f7b9b2360750e6077036c7c3fd6e4439c60b6/xskillscore/core/deterministic.py#L849), which are faster than the xr-based versions of `mse` (https://github.com/aaronspring/xskillscore/blob/150f7b9b2360750e6077036c7c3fd6e4439c60b6/xskillscore/xr/deterministic.py#L6) or `xr.corr`; see https://github.com/xarray-contrib/xskillscore/pull/231.

Additional context

My question here is whether it would be better to move these xskillscore metrics upstream into xarray, or to start a PR adding `weighted` and `skipna` to `xr.corr` (which I would prefer).
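As background for the request: a weighted Pearson correlation reduces to weighted means, a weighted covariance, and weighted variances. A minimal NumPy sketch of the idea (illustrative only, not xarray's or xskillscore's implementation; `weighted_corr` is a hypothetical helper):

```python
import numpy as np

def weighted_corr(x, y, w):
    """Weighted Pearson correlation of 1-D arrays x and y with weights w."""
    w = w / w.sum()                          # normalise weights to sum to 1
    mx, my = np.sum(w * x), np.sum(w * y)    # weighted means
    cov = np.sum(w * (x - mx) * (y - my))    # weighted covariance
    vx = np.sum(w * (x - mx) ** 2)           # weighted variances
    vy = np.sum(w * (y - my) ** 2)
    return cov / np.sqrt(vx * vy)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1
# With uniform weights this reduces to the ordinary Pearson correlation:
print(weighted_corr(x, y, np.ones_like(x)))  # → 1.0
```

With uniform weights the result matches `np.corrcoef`; non-uniform weights (e.g. `cos(lat)`) then simply re-weight each sample's contribution.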

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4768/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#7342 `xr.DataArray.plot.pcolormesh(robust="col/row")` · aaronspring (12237157) · closed · 3 comments · created 2022-12-01T16:01:27Z · updated 2022-12-12T12:17:45Z · closed 2022-12-12T12:17:45Z · CONTRIBUTOR · id 1471561942

Is your feature request related to a problem?

I often want to get a quick view of multi-dimensional data from an xr.Dataset with multiple variables at once, in a one-liner. I really like the robust=True feature and think it could also accept "col" and "row" to apply robust limits only across columns or rows.

Describe the solution you'd like

```python
ds = xr.tutorial.load_dataset("eraint_uvz")
ds.mean("month").to_array().plot(col="level", row="variable", robust="row")
```

What I get, and do not like, because robust is applied either to all data or to none:

What I would like to see is the result of the alternative below, which is what I always end up doing by hand.

Describe alternatives you've considered

```python
ds = xr.tutorial.load_dataset("eraint_uvz")
for v in ds.data_vars:
    ds[v].mean("month").plot(col="level", robust=True)
    plt.show()
```

Additional context

No response
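For context, `robust=True` clips the colour range to the 2nd and 98th percentiles of all plotted data; the request is to compute those limits per row or column of panels instead. A NumPy sketch of per-row limits (`robust_limits` is a hypothetical helper, not xarray API):

```python
import numpy as np

def robust_limits(data, axis=None, q=(2, 98)):
    """2nd/98th-percentile colour limits, one pair per slice along `axis`."""
    vmin = np.nanpercentile(data, q[0], axis=axis)
    vmax = np.nanpercentile(data, q[1], axis=axis)
    return vmin, vmax

# Two "rows" of panel data with very different scales:
data = np.stack([np.linspace(0, 1, 101), np.linspace(0, 100, 101)])
vmin, vmax = robust_limits(data, axis=1)  # one (vmin, vmax) pair per row
print(vmin.shape, vmax.shape)  # → (2,) (2,)
```

Passing `axis=None` reproduces today's global behaviour; an axis per facet row or column gives the requested `robust="row"`/`"col"` semantics.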

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7342/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#6045 `xr.infer_freq` month bug: `freq='6MS'` starting Jan becomes `freq='2QS-OCT'` · aaronspring (12237157) · closed · 3 comments · created 2021-12-03T23:36:56Z · updated 2022-06-24T22:58:47Z · closed 2022-06-24T22:58:47Z · CONTRIBUTOR · id 1071049280

What happened:

@dougiesquire brought up https://github.com/pangeo-data/climpred/issues/698. During debugging I discovered unexpected behaviour in xr.infer_freq: freq='6MS' starting Jan becomes freq='2QS-OCT'

What you expected to happen: `freq='6MS'` starting in January becomes `freq='2QS-JAN'`

Minimal Complete Verifiable Example:

Creating a 6MS index starting in January with pandas and xarray yields a different freq. 2QS and 6MS are equivalent for quarter-starting months, but the month offset in `CFTimeIndex.freq` is wrong.

```python
import pandas as pd
i_pd = pd.date_range(start="2000-01-01", end="2002-01-01", freq="6MS")
i_pd
# DatetimeIndex(['2000-01-01', '2000-07-01', '2001-01-01', '2001-07-01',
#                '2002-01-01'],
#               dtype='datetime64[ns]', freq='6MS')

pd.infer_freq(i_pd)
# '2QS-OCT'

import xarray as xr
xr.cftime_range(start="2000-01-01", end="2002-01-01", freq="6MS")
# CFTimeIndex([2000-01-01 00:00:00, 2000-07-01 00:00:00, 2001-01-01 00:00:00,
#              2001-07-01 00:00:00, 2002-01-01 00:00:00],
#             dtype='object', length=5, calendar='gregorian', freq='2QS-OCT')
```

Anything else we need to know?:

An outline of how to solve this: https://github.com/pangeo-data/climpred/issues/698#issuecomment-985899966
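To see why the inferred string is equivalent in dates but misleadingly labelled: for an index starting in January, '6MS', '2QS-JAN' and '2QS-OCT' all generate the same timestamps, so only the anchor month in the label differs. A small pandas check (assuming these frequency aliases are valid in the installed pandas):

```python
import pandas as pd

# All three frequency strings describe the same semi-annual, month-start grid
# when the index starts in January; only the reported anchor month differs.
i_6ms = pd.date_range("2000-01-01", "2002-01-01", freq="6MS")
i_jan = pd.date_range("2000-01-01", "2002-01-01", freq="2QS-JAN")
i_oct = pd.date_range("2000-01-01", "2002-01-01", freq="2QS-OCT")
print(i_6ms.equals(i_jan), i_6ms.equals(i_oct))  # → True True
```

So the bug is in the label, not the dates: `infer_freq` picks a valid but confusing anchor month.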

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6045/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#6134 [FEATURE]: `CFTimeIndex.shift(float)` · aaronspring (12237157) · closed · 1 comment · created 2022-01-03T22:33:58Z · updated 2022-02-15T23:05:04Z · closed 2022-02-15T23:05:04Z · CONTRIBUTOR · id 1092867975

Is your feature request related to a problem?

`CFTimeIndex.shift()` allows only `int`, but sometimes I'd like to shift by a float, e.g. 0.5.

For small freqs that shouldn't be a problem, as `pd.Timedelta` allows floats for days and below. For freqs of months and larger it becomes more tricky: fractional shifts work easily for the 360-day calendar, but for other calendars that's not possible.

Describe the solution you'd like

```python
CFTimeIndex.shift(0.5, 'D')  # should work
CFTimeIndex.shift(0.5, 'M')  # should work for the 360-day calendar
CFTimeIndex.shift(0.5, 'M')  # should fail for other calendars
```

Describe alternatives you've considered

The solution we have in climpred: https://github.com/pangeo-data/climpred/blob/617223b5bea23a094065efe46afeeafe9796fa97/climpred/utils.py#L657

Additional context

https://xarray.pydata.org/en/stable/generated/xarray.CFTimeIndex.shift.html
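The reason the 360-day calendar is easy: every month has exactly 30 days, so any fractional month shift converts exactly to days. A sketch of that conversion (hypothetical helper, not the climpred implementation linked above):

```python
def months_to_days_360(n_months):
    """Convert an (optionally fractional) month shift to days, assuming the
    360-day calendar where every month has exactly 30 days."""
    days = n_months * 30
    if days != int(days):
        raise ValueError(f"{n_months} months is not a whole number of days")
    return int(days)

print(months_to_days_360(0.5))  # → 15
print(months_to_days_360(1.5))  # → 45
```

For real-world calendars month lengths vary, so the same fractional shift has no exact day equivalent, which is why the feature request restricts `shift(float, 'M')` to the 360-day calendar.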

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6134/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#4451 xr.open_dataset(remote_url) file not found · aaronspring (12237157) · closed · 1 comment · created 2020-09-23T10:00:54Z · updated 2020-09-23T12:03:37Z · closed 2020-09-23T12:03:37Z · CONTRIBUTOR · id 707223289

What happened:

I tried to open a remote URL and got an OSError, but `!wget url` works.

What you expected to happen:

open the remote netcdf file

Minimal Complete Verifiable Example:

```python
from netCDF4 import Dataset

import netCDF4
netCDF4.__version__

import xarray as xr
xr.__version__

url = 'https://crudata.uea.ac.uk/cru/data/temperature/HadCRUT.4.6.0.0.median.nc'

working_url = 'https://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GFS/Global_0p5deg/GFS_Global_0p5deg_20200923_0000.grib2'

xr.open_dataset(url)
# ...
# netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Dataset.__init__()
# netCDF4/_netCDF4.pyx in netCDF4._netCDF4._ensure_nc_success()
# OSError: [Errno -90] NetCDF: file not found: b'https://crudata.uea.ac.uk/cru/data/temperature/HadCRUT.4.6.0.0.median.nc'
```

This seems to be a netCDF4 upstream issue:

```python
Dataset(url)
# OSError                                   Traceback (most recent call last)
# <ipython-input-14-265839034cee> in <module>
# ----> 1 Dataset(url)
#
# netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Dataset.__init__()
# netCDF4/_netCDF4.pyx in netCDF4._netCDF4._ensure_nc_success()
#
# OSError: [Errno -90] NetCDF: file not found: b'https://crudata.uea.ac.uk/cru/data/temperature/HadCRUT.4.6.0.0.median.nc'
```

Anything else we need to know?:
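A plausible explanation (an assumption on my part, not confirmed in the issue): the working URL is a THREDDS OPeNDAP endpoint that the netCDF-C library can stream, while the failing one is a plain HTTPS file link that must be downloaded first. A tiny heuristic for telling the two apart (`looks_like_opendap` is a hypothetical helper):

```python
def looks_like_opendap(url):
    """Heuristic: THREDDS serves OPeNDAP endpoints under '/dodsC/'.
    Plain HTTPS links to .nc files cannot be opened directly by the
    netCDF-C library and must be downloaded instead."""
    return "/dodsC/" in url or "/opendap/" in url

url = 'https://crudata.uea.ac.uk/cru/data/temperature/HadCRUT.4.6.0.0.median.nc'
working_url = ('https://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GFS/'
               'Global_0p5deg/GFS_Global_0p5deg_20200923_0000.grib2')
print(looks_like_opendap(url), looks_like_opendap(working_url))  # → False True
```

This matches the observation that `!wget url` works: the server is happy to serve the file as bytes, it just does not speak the OPeNDAP protocol.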

Environment:

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.6 | packaged by conda-forge | (default, Jan 7 2020, 22:33:48) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 2.6.32-754.29.2.el6.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.6.2
xarray: 0.16.1
pandas: 1.1.2
numpy: 1.19.1
scipy: 1.5.2
netCDF4: 1.5.1.2
pydap: installed
h5netcdf: 0.8.0
h5py: 2.10.0
Nio: 1.5.5
zarr: 2.4.0
cftime: 1.2.1
nc_time_axis: 1.2.0
PseudoNetCDF: None
rasterio: 1.1.0
cfgrib: 0.9.7.6
iris: 2.2.0
bottleneck: 1.3.1
dask: 2.15.0
distributed: 2.20.0
matplotlib: 3.1.2
cartopy: 0.17.0
seaborn: 0.10.1
numbagg: None
pint: 0.11
setuptools: 47.1.1.post20200529
pip: 20.2.3
conda: None
pytest: 5.3.5
IPython: 7.15.0
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4451/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#4290 bool(Dataset(False)) is True · aaronspring (12237157) · closed · 9 comments · created 2020-07-30T13:23:14Z · updated 2020-08-05T14:25:55Z · closed 2020-08-05T13:48:55Z · CONTRIBUTOR · id 668717850

What happened:

```python
v = True
bool(xr.DataArray(v))                         # True
bool(xr.DataArray(v).to_dataset(name='var'))  # True

v = False
bool(xr.DataArray(v))                         # False

# unexpected behaviour below
bool(xr.DataArray(v).to_dataset(name='var'))  # True
```

What you expected to happen:

```python
bool(xr.DataArray(False).to_dataset(name='var'))  # False
```

Maybe this is intentional and I don't understand why.

`xr.__version__` = '0.16.0'
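One way to make sense of this (my reading, not a quote from the xarray docs): a Dataset behaves like a mapping of variable names to variables, so its truth value says "contains at least one variable", not anything about the variables' values. A plain-dict analogy:

```python
# A Dataset is dict-like: {variable name: variable}. Python's truthiness
# for containers means "is it non-empty?", regardless of the stored values.
ds_like = {"var": False}     # one variable whose value happens to be False
print(bool(ds_like))         # → True: the container holds one entry
print(bool(ds_like["var"]))  # → False: the value itself
```

A DataArray, by contrast, defers to the truth value of its underlying data, which is why the two behave differently.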

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4290/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#4025 Visualize task tree · aaronspring (12237157) · closed · 3 comments · created 2020-05-04T12:31:25Z · updated 2020-05-08T09:10:08Z · closed 2020-05-04T14:43:25Z · CONTRIBUTOR · id 611839345

While reading this excellent discussion on working with datasets made of many one-timestep netCDF files (https://discourse.pangeo.io/t/best-practices-to-go-from-1000s-of-netcdf-files-to-analyses-on-a-hpc-cluster/588/10), I asked myself again why we don't have the task-tree visualisation in xarray that we have in dask. Is there a technical reason that prevents us from implementing `visualize`?

This feature would be extremely useful for me.

Maybe it's easier to do this for DataArrays first.

```python
ds = xr.tutorial.open_dataset("rasm")  # the rasm tutorial dataset
ds = ds.chunk({"time": 2})
ds.visualize()
```

Expected Output

Figure of task tree

https://docs.dask.org/en/latest/graphviz.html

Problem Description

Visualizing the task tree is only implemented in dask. Currently I recreate my xarray problem in dask to work around this; nicer would be a `.visualize()` method in xarray.
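For readers unfamiliar with what `.visualize()` would draw: a dask task graph is just a dict mapping keys to tasks, and the visualization renders that dict with graphviz. A toy, stdlib-only sketch of such a graph and a minimal executor (illustrative; real dask graphs and schedulers are more elaborate):

```python
from operator import add

# A task graph maps keys to either literal values or (func, *arg_keys) tuples.
graph = {
    "x-0": 1,
    "x-1": 2,
    "sum": (add, "x-0", "x-1"),
}

def get(dsk, key):
    """Minimal recursive task-graph executor."""
    task = dsk[key]
    if isinstance(task, tuple):
        func, *args = task
        return func(*(get(dsk, a) for a in args))
    return task

print(get(graph, "sum"))  # → 3
```

Each chunked xarray variable carries such a graph in its dask-backed `.data`, which is what a hypothetical `ds.visualize()` would render.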


{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4025/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#3843 Implement `skipna` in xr.quantile for speedup · aaronspring (12237157) · closed · 1 comment · created 2020-03-06T17:58:28Z · updated 2020-03-08T17:42:43Z · closed 2020-03-08T17:42:43Z · CONTRIBUTOR · id 577088426

`xr.quantile` uses `np.nanquantile`, which is slower than `np.quantile` but only needed when NaNs must be ignored. Adding `skipna` as a kwarg would yield a speedup for many use-cases.

MCVE Code Sample

`np.quantile` is much faster than `np.nanquantile`:

```python
control = xr.DataArray(np.random.random((50, 256, 192)), dims=['time', 'x', 'y'])
%time _ = control.quantile(dim='time', q=q)
# CPU times: user 4.14 s, sys: 61.4 ms, total: 4.2 s
# Wall time: 4.3 s

%time _ = np.quantile(control, q, axis=0)
# CPU times: user 47.1 ms, sys: 4.27 ms, total: 51.4 ms
# Wall time: 52.6 ms

%time _ = np.nanquantile(control, q, axis=0)
# CPU times: user 3.18 s, sys: 21.4 ms, total: 3.2 s
# Wall time: 3.22 s
```

Expected Output

A faster `xr.quantile`:

```python
%time _ = control.quantile(dim='time', q=q)
# CPU times: user 4.95 s, sys: 34.3 ms, total: 4.98 s
# Wall time: 5.88 s

%time _ = control.quantile(dim='time', q=q, skipna=False)
# CPU times: user 85.3 ms, sys: 16.7 ms, total: 102 ms
# Wall time: 127 ms
```

Problem Description

`np.nanquantile` is not always needed.
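The requested change amounts to a simple dispatch: take the fast `np.quantile` path when `skipna=False`. A sketch of the idea (not the eventual xarray implementation):

```python
import numpy as np

def quantile(a, q, axis=None, skipna=True):
    """Use np.nanquantile only when NaNs must be ignored."""
    func = np.nanquantile if skipna else np.quantile
    return func(a, q, axis=axis)

a = np.array([[1.0, 2.0], [3.0, 4.0]])
print(quantile(a, 0.5, skipna=False))  # → 2.5
```

When the data are known to be NaN-free, `skipna=False` gives identical results at a fraction of the cost, matching the timings above.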

Versions

Output of `xr.show_versions()`: xr = 0.15.1
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3843/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#2900 open_mfdataset with preprocess ds[var] · aaronspring (12237157) · closed · 3 comments · created 2019-04-16T15:07:36Z · updated 2019-04-16T19:09:34Z · closed 2019-04-16T19:09:34Z · CONTRIBUTOR · id 433833707

Code Sample, a copy-pastable example if possible

I would like to load only one variable from larger files containing tens of variables. The files get really large when I open them. I expect them to be opened lazily, and therefore fast, even if I only want to extract one variable (maybe this is my misunderstanding).

I hoped to use preprocess, but I can't get it working.

Here is my minimal example with 3 files of 12 timesteps each and two variables, of which I only want to load one:

```python
ds = xr.open_mfdataset(path)
ds
# <xarray.Dataset>
# Dimensions:  (depth: 1, depth_2: 1, time: 36, x: 2, y: 2)
# Coordinates:
#   * depth    (depth) float64 0.0
#     lon      (y, x) float64 -48.11 -47.43 -48.21 -47.52
#     lat      (y, x) float64 56.52 56.47 56.14 56.09
#   * depth_2  (depth_2) float64 90.0
#   * time     (time) datetime64[ns] 1850-01-31T23:15:00 ... 1852-12-31T23:15:00
# Dimensions without coordinates: x, y
# Data variables:
#     co2flux  (time, depth, y, x) float32 dask.array<shape=(36, 1, 2, 2), chunksize=(12, 1, 2, 2)>
#     caex90   (time, depth_2, y, x) float32 dask.array<shape=(36, 1, 2, 2), chunksize=(12, 1, 2, 2)>

def preprocess(ds, var='co2flux'):
    return ds[var]

ds = xr.open_mfdataset(path, preprocess=preprocess)
```

```
ValueError                                Traceback (most recent call last)
<ipython-input-17-770267b86462> in <module>
      1 def preprocess(ds,var='co2flux'):
      2     return ds[var]
----> 3 ds = xr.open_mfdataset(path,preprocess=preprocess)

/work/mh0727/m300524/anaconda3/envs/my_jupyter/lib/python3.6/site-packages/xarray/backends/api.py in open_mfdataset(paths, chunks, concat_dim, compat, preprocess, engine, lock, data_vars, coords, autoclose, parallel, **kwargs)
    717                 data_vars=data_vars, coords=coords,
    718                 infer_order_from_coords=infer_order_from_coords,
--> 719                 ids=ids)
    720     except ValueError:
    721         for ds in datasets:

/work/mh0727/m300524/anaconda3/envs/my_jupyter/lib/python3.6/site-packages/xarray/core/combine.py in _auto_combine(datasets, concat_dims, compat, data_vars, coords, infer_order_from_coords, ids)
    551     # Repeatedly concatenate then merge along each dimension
    552     combined = _combine_nd(combined_ids, concat_dims, compat=compat,
--> 553                            data_vars=data_vars, coords=coords)
    554     return combined
    555

/work/mh0727/m300524/anaconda3/envs/my_jupyter/lib/python3.6/site-packages/xarray/core/combine.py in _combine_nd(combined_ids, concat_dims, data_vars, coords, compat)
    473                            data_vars=data_vars,
    474                            coords=coords,
--> 475                            compat=compat)
    476     combined_ds = list(combined_ids.values())[0]
    477     return combined_ds

/work/mh0727/m300524/anaconda3/envs/my_jupyter/lib/python3.6/site-packages/xarray/core/combine.py in _auto_combine_all_along_first_dim(combined_ids, dim, data_vars, coords, compat)
    491         datasets = combined_ids.values()
    492         new_combined_ids[new_id] = _auto_combine_1d(datasets, dim, compat,
--> 493                                                     data_vars, coords)
    494     return new_combined_ids
    495

/work/mh0727/m300524/anaconda3/envs/my_jupyter/lib/python3.6/site-packages/xarray/core/combine.py in _auto_combine_1d(datasets, concat_dim, compat, data_vars, coords)
    505     if concat_dim is not None:
    506         dim = None if concat_dim is _CONCAT_DIM_DEFAULT else concat_dim
--> 507         sorted_datasets = sorted(datasets, key=vars_as_keys)
    508         grouped_by_vars = itertools.groupby(sorted_datasets, key=vars_as_keys)
    509         concatenated = [_auto_concat(list(ds_group), dim=dim,

/work/mh0727/m300524/anaconda3/envs/my_jupyter/lib/python3.6/site-packages/xarray/core/combine.py in vars_as_keys(ds)
    496
    497 def vars_as_keys(ds):
--> 498     return tuple(sorted(ds))
    499
    500

/work/mh0727/m300524/anaconda3/envs/my_jupyter/lib/python3.6/site-packages/xarray/core/common.py in __bool__(self)
     80
     81     def __bool__(self):
---> 82         return bool(self.values)
     83
     84     # Python 3 uses __bool__, Python 2 uses __nonzero__

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
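The traceback bottoms out in `__bool__`, which suggests `preprocess` returned a `DataArray` where the combine step expected a `Dataset`. Returning `ds[[var]]` (list selection, which keeps the `Dataset` type) instead of `ds[var]` should avoid this. The same single- vs double-bracket convention exists in pandas, shown here as an analogy:

```python
import pandas as pd

df = pd.DataFrame({"co2flux": [1.0, 2.0], "caex90": [3.0, 4.0]})
# Single brackets extract one column as a Series (analogue of a DataArray);
# a list of names keeps the tabular container (analogue of a Dataset).
print(type(df["co2flux"]).__name__)    # → Series
print(type(df[["co2flux"]]).__name__)  # → DataFrame
```

In xarray terms: `preprocess = lambda ds: ds[['co2flux']]` keeps each opened file a one-variable `Dataset`, which the combine machinery can handle.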

I was hoping that `data_vars` could work like this, but it has no effect. Probably I got the documentation wrong here.

```python
ds = xr.open_mfdataset(path, data_vars=['co2flux'])
ds
# <xarray.Dataset>
# Dimensions:  (depth: 1, depth_2: 1, time: 36, x: 2, y: 2)
# Coordinates:
#   * depth    (depth) float64 0.0
#     lon      (y, x) float64 -48.11 -47.43 -48.21 -47.52
#     lat      (y, x) float64 56.52 56.47 56.14 56.09
#   * depth_2  (depth_2) float64 90.0
#   * time     (time) datetime64[ns] 1850-01-31T23:15:00 ... 1852-12-31T23:15:00
# Dimensions without coordinates: x, y
# Data variables:
#     co2flux  (time, depth, y, x) float32 dask.array<shape=(36, 1, 2, 2), chunksize=(12, 1, 2, 2)>
#     caex90   (time, depth_2, y, x) float32 dask.array<shape=(36, 1, 2, 2), chunksize=(12, 1, 2, 2)>
```

Problem description

From the documentation I would expect the behaviour below.

Expected Output

```python
ds = xr.open_mfdataset(path, data_vars=['co2flux'])
ds
# <xarray.Dataset>
# Dimensions:  (depth: 1, depth_2: 1, time: 36, x: 2, y: 2)
# Coordinates:
#   * depth    (depth) float64 0.0
#     lon      (y, x) float64 -48.11 -47.43 -48.21 -47.52
#     lat      (y, x) float64 56.52 56.47 56.14 56.09
#   * depth_2  (depth_2) float64 90.0
#   * time     (time) datetime64[ns] 1850-01-31T23:15:00 ... 1852-12-31T23:15:00
# Dimensions without coordinates: x, y
# Data variables:
#     co2flux  (time, depth, y, x) float32 dask.array<shape=(36, 1, 2, 2), chunksize=(12, 1, 2, 2)>

ds = xr.open_mfdataset(path, preprocess=preprocess)
ds
# <xarray.Dataset>
# Dimensions:  (depth: 1, depth_2: 1, time: 36, x: 2, y: 2)
# Coordinates:
#   * depth    (depth) float64 0.0
#     lon      (y, x) float64 -48.11 -47.43 -48.21 -47.52
#     lat      (y, x) float64 56.52 56.47 56.14 56.09
#   * depth_2  (depth_2) float64 90.0
#   * time     (time) datetime64[ns] 1850-01-31T23:15:00 ... 1852-12-31T23:15:00
# Dimensions without coordinates: x, y
# Data variables:
#     co2flux  (time, depth, y, x) float32 dask.array<shape=(36, 1, 2, 2), chunksize=(12, 1, 2, 2)>
```

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.7 |Anaconda, Inc.| (default, Oct 23 2018, 19:16:44) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 2.6.32-696.18.7.el6.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.4
libnetcdf: 4.6.2
xarray: 0.12.1
pandas: 0.24.2
numpy: 1.14.2
scipy: 1.2.1
netCDF4: 1.4.2
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.2.0
cftime: 1.0.3.4
nc_time_axis: None
PseudonetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.2.1
dask: 1.2.0
distributed: 1.27.0
matplotlib: 3.0.3
cartopy: 0.17.0
seaborn: 0.9.0
setuptools: 40.4.0
pip: 18.1
conda: None
pytest: None
IPython: 7.0.1
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2900/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue


Table schema:

```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
```