issues
9 rows where repo = 13221727, type = "issue" and user = 35919497 sorted by updated_at descending

#2157 · groupby should not squeeze out dimensions · aurghs (35919497) · closed · 1 comment · created 2018-05-18T05:10:57Z · updated 2024-01-08T01:05:24Z · closed 2024-01-08T01:05:24Z · COLLABORATOR · id 324272267

Code Sample

```python
arr = xr.DataArray(
    np.ones(3),
    dims=('x',),
    coords={
        'x': ('x', np.array([1, 3, 6])),
    }
)
list(arr.groupby('x'))

[(1, <xarray.DataArray ()> array(1.) Coordinates: x int64 1),
 (3, <xarray.DataArray ()> array(1.) Coordinates: x int64 3),
 (6, <xarray.DataArray ()> array(1.) Coordinates: x int64 6)]
```

Problem description

The dimension x disappears. I have done some tests and it seems that this problem arises only with strictly ascending coordinates. For example, in this case it works correctly:

```python
arr = xr.DataArray(
    np.ones(3),
    dims=('x',),
    coords={
        'x': ('x', np.array([2, 1, 0])),
    }
)
list(arr.groupby('x'))

[(0, <xarray.DataArray (x: 1)> array([1.]) Coordinates: * x (x) int64 0),
 (1, <xarray.DataArray (x: 1)> array([1.]) Coordinates: * x (x) int64 1),
 (2, <xarray.DataArray (x: 1)> array([1.]) Coordinates: * x (x) int64 2)]
```

Expected Output

```python
arr = xr.DataArray(
    np.ones(3),
    dims=('x',),
    coords={
        'x': ('x', np.array([1, 3, 6])),
    }
)
list(arr.groupby('x'))

[(1, <xarray.DataArray (x: 1)> array([1.]) Coordinates: * x (x) int64 1),
 (3, <xarray.DataArray (x: 1)> array([1.]) Coordinates: * x (x) int64 3),
 (6, <xarray.DataArray (x: 1)> array([1.]) Coordinates: * x (x) int64 6)]
```
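Until this is fixed, one possible workaround (a sketch, assuming the squeeze keyword that groupby accepts in this xarray version) is to disable the squeezing explicitly:

```python
import numpy as np
import xarray as xr

arr = xr.DataArray(
    np.ones(3),
    dims=('x',),
    coords={'x': ('x', np.array([1, 3, 6]))},
)
# With squeeze=False each group keeps its (x: 1) dimension instead of
# being squeezed to a 0-d array.
for label, group in arr.groupby('x', squeeze=False):
    print(label, group.dims)
```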

Output of xr.show_versions()

INSTALLED VERSIONS ------------------ commit: None python: 3.6.0.final.0 python-bits: 64 OS: Linux OS-release: 4.13.0-41-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 xarray: 0.10.4 pandas: 0.22.0 numpy: 1.14.3 scipy: 1.1.0 netCDF4: 1.3.1 h5netcdf: None h5py: 2.7.1 Nio: None zarr: None bottleneck: None cyordereddict: None dask: 0.17.4 distributed: 1.21.8 matplotlib: 2.2.2 cartopy: 0.16.0 seaborn: None setuptools: 38.4.1 pip: 10.0.1 conda: None pytest: 3.5.1 IPython: 6.2.1 sphinx: 1.7.4
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2157/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#4309 · Flexible Backend - AbstractDataStore definition · aurghs (35919497) · closed · 6 comments · created 2020-08-04T16:14:16Z · updated 2021-03-09T01:04:00Z · closed 2021-03-09T01:04:00Z · COLLABORATOR · id 672912921

I just want to give a brief recap of the current proposals for the AbstractDataStore class refactor discussed with @shoyer, @jhamman, and @alexamici.

Proposal 1: Store returns:
  • xr.Variables with the list of filters to apply to every variable
  • dataset attributes
  • encodings

Xarray applies to every variable only the filters selected by the backend before building the xr.Dataset.

Proposal 2: Store returns:
  • xr.Variables with all needed filters applied (configured by xarray)
  • dataset attributes
  • encodings

Xarray builds the xr.Dataset.

Proposal 3: Store returns:
  • xr.Dataset

Before going on, I'd like to collect pros and cons. To my understanding:

Proposal 1

Pros:
  • The backend is free to decide which representation to provide.
  • More control over the backend (? not necessarily true: the backend can decide to apply all the filters internally and provide xarray an empty list of filters to be applied).
  • The enable/disable filter logic would live in xarray.
  • All the filters (applied by xarray) would have a similar interface.
  • Registered filters could perhaps be reused by other backends.

Cons:
  • Confusing backend-xarray interface.
  • Interfaces are more difficult to define; more conflicts (registered filters with the same name, ...).
  • Needs more structure to define this interface, more code to maintain.

Proposal 2

Pros:
  • The backend-xarray interface is clearer; backend and xarray have well-defined, distinct tasks.
  • The interface would be minimal and easier to implement.
  • No intermediate representations.
  • Less code to maintain.

Cons:
  • Less control over filters.
  • A more complex explicit definition of the interface (every filter must understand what, e.g., decode_times means in its case).
  • More complexity inside the filters.

The minimal interface would be something like this:

```python
class AbstractBackEnd:
    def __init__(self, path, encode_times=True, ..., **kwargs):  # signature of open_dataset
        raise NotImplementedError

    def get_variables(self):
        """Return a dictionary of variable name and xr.Variable."""
        raise NotImplementedError

    def get_attrs(self):
        """Return the dataset attributes."""
        raise NotImplementedError

    def get_encoding(self):
        """Return the dataset encoding."""
        raise NotImplementedError

    def close(self):
        pass
```

Proposal 3

Pros w.r.t. proposal 2:
  • decode_coordinates is done by the backend, like the other filters.

cons?
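For comparison with the Proposal 2 interface above, a rough sketch of what a Proposal 3 store could look like (hypothetical; the class and method names are assumptions, not an agreed design):

```python
import xarray as xr

class AbstractBackend:
    """Hypothetical Proposal 3 store: it returns a complete xr.Dataset."""

    def open_dataset(self, path, decode_times=True, **kwargs):
        # The backend applies all filters itself (including
        # decode_coordinates) and hands xarray a ready Dataset.
        raise NotImplementedError

    def close(self):
        pass
```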

Any suggestions?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4309/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#4803 · Update Documentation for backend Implementation · aurghs (35919497) · closed · 1 comment · created 2021-01-13T16:04:47Z · updated 2021-03-08T20:58:02Z · closed 2021-03-08T20:58:02Z · COLLABORATOR · id 785233324

The backend read-support refactor is drawing to a close, and we should start adding the documentation explaining how to implement new backends.

We should:
  • decide where to put the documentation
  • decide on a title
  • define a brief list of the main points to discuss in the documentation.

For the first point, I suggest putting the documentation in "Internal". For the second one, I suggest: "How to add a new backend".

Concerning the third point, here is a list of the topics I suggest (a minimal sketch of the resulting interface follows the list):
  • BackendEntrypoint description (BackendEntrypoint is the main interface with xarray; it is a container of functions to be implemented and attributes: guess_can_open, open_dataset, open_dataset_parameters, [guess_can_write], [dataset_writer]).
  • How to add the backend as an external entrypoint.
  • Description of the functions contained in BackendEntrypoint to be implemented. In particular, for open_dataset we have two options to describe:
      - Not lazy: it returns a dataset containing numpy arrays.
      - Lazy: it returns a dataset containing BackendArrays. BackendArray description:
          - thread-safe __getitem__
          - picklable (use CachingFileManager)
          - indexing.IndexingSupport
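As a starting point for that documentation, a minimal sketch of such a backend. It is based on the interface described above; the exact signatures and the MyBackendEntrypoint name are assumptions for illustration, and released versions may differ:

```python
import numpy as np
import xarray as xr
from xarray.backends import BackendEntrypoint

class MyBackendEntrypoint(BackendEntrypoint):
    open_dataset_parameters = ("filename_or_obj", "drop_variables")

    def guess_can_open(self, filename_or_obj):
        # Claim only files with our made-up extension.
        return str(filename_or_obj).endswith(".my")

    def open_dataset(self, filename_or_obj, *, drop_variables=None):
        # "Not lazy" variant: return a Dataset of plain numpy arrays.
        return xr.Dataset({"foo": ("x", np.arange(3))})
```

The backend would then be advertised to xarray as an external entrypoint, e.g. by registering MyBackendEntrypoint under the xarray.backends entry-point group of the distributing package.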

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4803/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#2148 · groupby beahaviour w.r.t. non principal coordinates · aurghs (35919497) · closed · 4 comments · created 2018-05-17T13:52:43Z · updated 2020-12-17T11:47:47Z · closed 2020-12-17T11:47:47Z · COLLABORATOR · id 324032926

Code Sample

```python
import numpy as np
import xarray as xr

arr = xr.DataArray(
    np.ones(5),
    dims=('x',),
    coords={
        'x': ('x', np.array([1, 1, 1, 2, 2])),
        'x2': ('x', np.array([1, 2, 3, 4, 5])),
    }
)
arr
<xarray.DataArray (x: 5)>
array([1., 1., 1., 1., 1.])
Coordinates:
  * x        (x) int64 1 1 1 2 2
    x2       (x) int64 1 2 3 4 5

out = arr.groupby('x').mean('x')
out
<xarray.DataArray (x: 2)>
array([1., 1.])
Coordinates:
  * x        (x) int64 1 2
    x2       (x) int64 1 2 3 4 5
```

Problem description

Inconsistency between:
  • the shape of the dimension x: (2,)
  • the shape of the coordinate x2, which depends on the dimension x: (5,)

Expected Output

The coordinate x2 should be dropped:

```python
<xarray.DataArray (x: 2)>
array([1., 1.])
Coordinates:
  * x        (x) int64 1 2
```
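In the meantime, a possible workaround (a sketch using the DataArray.drop method available in this xarray version) is to drop the secondary coordinate before grouping:

```python
# Dropping x2 up front avoids carrying an inconsistent coordinate
# through the groupby reduction.
out = arr.drop('x2').groupby('x').mean('x')
```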

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None python: 3.6.0.final.0 python-bits: 64 OS: Linux OS-release: 4.13.0-41-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8

xarray: 0.10.4 pandas: 0.22.0 numpy: 1.14.3 scipy: 1.1.0 netCDF4: 1.3.1 h5netcdf: None h5py: 2.7.1 Nio: None zarr: None bottleneck: None cyordereddict: None dask: 0.17.4 distributed: 1.21.8 matplotlib: 2.2.2 cartopy: 0.16.0 seaborn: None setuptools: 38.4.1 pip: 10.0.1 conda: None pytest: 3.5.1 IPython: 6.2.1 sphinx: 1.7.4
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2148/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#4468 · Backend read support: dynamic import in xarray namespace of backend open functions · aurghs (35919497) · closed · 0 comments · created 2020-09-28T08:47:09Z · updated 2020-12-10T14:29:56Z · closed 2020-12-10T14:29:56Z · COLLABORATOR · id 710071238

@jhamman, @shoyer, @alexamici: last time we discussed the possibility of importing the open functions of the backends, open_dataset_${engine}, directly into the xarray namespace. I just want to recap some pros and cons of this proposal (a rough sketch of how this could look follows the lists below):

Pros:
  • Expert users can use the open function of the backend directly (without using engine=).
  • They can use the Tab key to autocomplete the backend kwargs.
  • They can easily access the backend open function's signature (that's really useful!).

Cons:
  • Users might also expect the other corresponding functions in the namespace: open_mfdataset_${engine}, open_dataarray_${engine}, etc., and we are not going to add those because it would be too confusing.
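A rough sketch of how such functions could be generated (hypothetical; _make_open_function and the injected names are illustrations, not an implemented xarray API):

```python
import functools
import xarray as xr

def _make_open_function(engine):
    # Build an open_dataset_${engine} wrapper that pins the engine
    # while forwarding everything else to xr.open_dataset.
    @functools.wraps(xr.open_dataset)
    def open_dataset_for_engine(filename_or_obj, **kwargs):
        return xr.open_dataset(filename_or_obj, engine=engine, **kwargs)

    open_dataset_for_engine.__name__ = f"open_dataset_{engine}"
    return open_dataset_for_engine

# The plugin machinery could then do, for every registered engine, e.g.:
# setattr(xr, "open_dataset_zarr", _make_open_function("zarr"))
```

Note that to really get tab completion of backend-specific kwargs, the real implementation would have to expose the backend's own signature instead of the generic one; that detail is omitted here.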

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4468/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#4496 · Flexible backends - Harmonise zarr chunking with other backends chunking · aurghs (35919497) · closed · assignee aurghs (35919497) · 7 comments · created 2020-10-08T14:43:23Z · updated 2020-12-10T10:51:09Z · closed 2020-12-10T10:51:09Z · COLLABORATOR · id 717410970

Is your feature request related to a problem? Please describe.

In #4309 we proposed to separate xarray and backend tasks, more or less in this way:
  • the backend returns a dataset
  • xarray manages chunks and cache.

With the changes to open_dataset to also support zarr (#4187), we introduced a slightly different behavior for zarr chunking with respect to the other backends.

Behavior of all the backends except zarr:
  • if chunks == {} or 'auto': it uses dask with only one chunk per variable
  • if the user defines chunks for only some of the dimensions, along the remaining dimensions it uses only one chunk:

```python
ds = xr.open_dataset('test.nc', chunks={'x': 4})
print(ds['foo'].chunks)
((4, 4, 4, 4, 4), (4,))
```

Zarr chunking behavior is very similar, but it has a different default when the user doesn't choose the size of the chunk along some dimensions, i.e.:
  • if chunks == {} or 'auto': it uses in both cases the on-disk chunks
  • if the user defines chunks for only some of the dimensions, along the remaining dimensions it uses the on-disk chunks:

```python
ds = xr.open_dataset('test.zarr', engine='zarr', chunks={'x': 4})
print(ds['foo'].encoding['chunks'])
(5, 2)
print(ds['foo'].chunks)
((4, 4, 4, 4, 4), (2, 2))
```

Describe the solution you'd like

We could easily extend the zarr behavior to all the backends (which, for now, don't use the field variable.encodings['chunks']): if no chunks are defined in the encoding, we use the dimension size as the default; otherwise, we use the encoded chunks. So for now we would not change any external behavior, but if needed the other backends can use this interface (a sketch of this resolution logic follows the notes below). I have some additional notes:

  • The key value 'auto' is redundant because it has the same behavior as {}; we could remove one of them.
  • I would separate the concepts of "on-disk chunks" and "preferred chunking". We could use a different key in encodings, or ask the backend to expose a function to compute the preferred chunking.
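A sketch of that default-resolution logic (an assumption for illustration, not the actual xarray code; resolve_chunks is a made-up helper):

```python
def resolve_chunks(dim_sizes, encoded_chunks, user_chunks):
    # dim_sizes: mapping dim name -> length; encoded_chunks: tuple from
    # variable.encoding['chunks'] or None; user_chunks: the chunks= argument.
    resolved = {}
    for i, (dim, size) in enumerate(dim_sizes.items()):
        if dim in user_chunks:
            resolved[dim] = user_chunks[dim]      # the user's choice wins
        elif encoded_chunks is not None:
            resolved[dim] = encoded_chunks[i]     # fall back to on-disk chunks
        else:
            resolved[dim] = size                  # one chunk spanning the dimension
    return resolved

# Mirrors the zarr example above:
# resolve_chunks({'x': 20, 'y': 4}, (5, 2), {'x': 4}) -> {'x': 4, 'y': 2}
```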

One last question: in the new interface of open_dataset there is a new key, imported from open_zarr: overwrite_encoded_chunks. Is it really needed? Why do we support overwriting the encoded chunks at read time? This operation can easily be done afterwards, or at write time.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4496/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#2196 · inconsistent time coordinates types · aurghs (35919497) · closed · 1 comment · created 2018-05-29T16:14:27Z · updated 2020-03-29T14:09:26Z · closed 2020-03-29T14:09:26Z · COLLABORATOR · id 327392061

Code Sample, a copy-pastable example if possible

```python
import numpy as np
import pandas as pd
import xarray as xr

time = np.arange('2005-02-01', '2007-03-01', dtype='datetime64')
arr = xr.DataArray(
    np.arange(time.size), coords=[time,], dims=('time',), name='data'
)
arr.resample(time='M').interpolate('linear')
```

```
ValueError                                Traceback (most recent call last)
<ipython-input-7-6a92b6afe08e> in <module>()
      7     np.arange(time.size), coords=[time,], dims=('time',), name='data'
      8 )
----> 9 arr.resample(time='M').interpolate('linear')

~/devel/c3s-cns/venv_op/lib/python3.6/site-packages/xarray/core/resample.py in interpolate(self, kind)
    108
    109         """
--> 110         return self._interpolate(kind=kind)
    111
    112     def _interpolate(self, kind='linear'):

~/devel/c3s-cns/venv_op/lib/python3.6/site-packages/xarray/core/resample.py in _interpolate(self, kind)
    218             elif self._dim not in v.dims:
    219                 coords[k] = v
--> 220         return DataArray(f(new_x), coords, dims, name=dummy.name,
    221                          attrs=dummy.attrs)
    222

~/devel/c3s-cns/venv_op/lib/python3.6/site-packages/scipy/interpolate/polyint.py in __call__(self, x)
     77         """
     78         x, x_shape = self._prepare_x(x)
---> 79         y = self._evaluate(x)
     80         return self._finish_y(y, x_shape)
     81

~/devel/c3s-cns/venv_op/lib/python3.6/site-packages/scipy/interpolate/interpolate.py in _evaluate(self, x_new)
    632         y_new = self._call(self, x_new)
    633         if not self._extrapolate:
--> 634             below_bounds, above_bounds = self._check_bounds(x_new)
    635             if len(y_new) > 0:
    636                 # Note fill_value must be broadcast up to the proper size

~/devel/c3s-cns/venv_op/lib/python3.6/site-packages/scipy/interpolate/interpolate.py in _check_bounds(self, x_new)
    664                              "range.")
    665         if self.bounds_error and above_bounds.any():
--> 666             raise ValueError("A value in x_new is above the interpolation "
    667                              "range.")
    668

ValueError: A value in x_new is above the interpolation range.
```

Problem description

The internal format of arr.time is datetime64[D]:

```python
arr.time

<xarray.DataArray 'time' (time: 758)>
array(['2005-02-01', '2005-02-02', '2005-02-03', ..., '2007-02-26',
       '2007-02-27', '2007-02-28'], dtype='datetime64[D]')
Coordinates:
  * time     (time) datetime64[D] 2005-02-01 2005-02-02 2005-02-03 ...
```

Internally there is a cast to float, for both the old time indices x and the new time indices new_x, but the new time indices are in datetime64[ns], so they don't match.
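A minimal demonstration of the mismatch (plain numpy, outside xarray): casting datetime64[D] and datetime64[ns] to float yields numbers on completely different scales, days versus nanoseconds since the epoch:

```python
import numpy as np

t = np.array(['2005-02-01'], dtype='datetime64[D]')
print(t.astype('float'))                           # [12815.]      (days)
print(t.astype('datetime64[ns]').astype('float'))  # [1.107216e+18] (nanoseconds)
```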

DataArrayResample._interpolate

```python
x = self._obj[self._dim].astype('float')
y = self._obj.data

axis = self._obj.get_axis_num(self._dim)

f = interp1d(x, y, kind=kind, axis=axis, bounds_error=True,
             assume_sorted=True)
new_x = self._full_index.values.astype('float')
```

With a cast to datetime64[ns] it works:

```python
import numpy as np
import pandas as pd
import xarray as xr

time = np.arange('2005-02-01', '2007-03-01', dtype='datetime64').astype('datetime64[ns]')
arr = xr.DataArray(
    np.arange(time.size), coords=[time,], dims=('time',), name='data'
)
arr.resample(time='M').interpolate('linear')
<xarray.DataArray 'data' (time: 25)>
array([ 27.,  58.,  88., 119., 149., 180., 211., 241., 272., 302., 333.,
       364., 392., 423., 453., 484., 514., 545., 576., 606., 637., 667.,
       698., 729., 757.])
Coordinates:
  * time     (time) datetime64[ns] 2005-02-28 2005-03-31 2005-04-30 ...
```

Expected Output

```python
<xarray.DataArray 'data' (time: 25)>
array([ 27.,  58.,  88., 119., 149., 180., 211., 241., 272., 302., 333.,
       364., 392., 423., 453., 484., 514., 545., 576., 606., 637., 667.,
       698., 729., 757.])
Coordinates:
  * time     (time) datetime64[ns] 2005-02-28 2005-03-31 2005-04-30 ...
```

Output of xr.show_versions()

INSTALLED VERSIONS ------------------ commit: None python: 3.6.0.final.0 python-bits: 64 OS: Linux OS-release: 4.13.0-43-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: en_GB.UTF-8 xarray: 0.10.4 pandas: 0.20.3 numpy: 1.13.1 scipy: 1.0.0 netCDF4: 1.3.1 h5netcdf: None h5py: None Nio: None zarr: None bottleneck: None cyordereddict: None dask: 0.16.1 distributed: None matplotlib: 2.0.2 cartopy: None seaborn: None setuptools: 38.4.0 pip: 10.0.1 conda: None pytest: 3.4.0 IPython: 6.1.0 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2196/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#2197 · DataArrayResample.interpolate coordinates out of bound. · aurghs (35919497) · closed · 2 comments · created 2018-05-30T06:33:58Z · updated 2019-01-03T01:18:06Z · closed 2019-01-03T01:18:06Z · COLLABORATOR · id 327591169

Code Sample, a copy-pastable example if possible

```python
import numpy as np
import pandas as pd
import xarray as xr

time = np.arange('2007-02-01', '2007-03-02', dtype='datetime64').astype('datetime64[ns]')
arr = xr.DataArray(
    np.arange(time.size), coords=[time,], dims=('time',), name='data'
)
arr.resample(time='M').interpolate('linear')
```

```
ValueError                                Traceback (most recent call last)
<ipython-input-20-ff65c4d138e7> in <module>()
      7     np.arange(time.size), coords=[time,], dims=('time',), name='data'
      8 )
----> 9 arr.resample(time='M').interpolate('linear')

~/devel/c3s-cns/venv_op/lib/python3.6/site-packages/xarray/core/resample.py in interpolate(self, kind)
    108
    109         """
--> 110         return self._interpolate(kind=kind)
    111
    112     def _interpolate(self, kind='linear'):

~/devel/c3s-cns/venv_op/lib/python3.6/site-packages/xarray/core/resample.py in _interpolate(self, kind)
    218             elif self._dim not in v.dims:
    219                 coords[k] = v
--> 220         return DataArray(f(new_x), coords, dims, name=dummy.name,
    221                          attrs=dummy.attrs)
    222

~/devel/c3s-cns/venv_op/lib/python3.6/site-packages/scipy/interpolate/polyint.py in __call__(self, x)
     77         """
     78         x, x_shape = self._prepare_x(x)
---> 79         y = self._evaluate(x)
     80         return self._finish_y(y, x_shape)
     81

~/devel/c3s-cns/venv_op/lib/python3.6/site-packages/scipy/interpolate/interpolate.py in _evaluate(self, x_new)
    632         y_new = self._call(self, x_new)
    633         if not self._extrapolate:
--> 634             below_bounds, above_bounds = self._check_bounds(x_new)
    635             if len(y_new) > 0:
    636                 # Note fill_value must be broadcast up to the proper size

~/devel/c3s-cns/venv_op/lib/python3.6/site-packages/scipy/interpolate/interpolate.py in _check_bounds(self, x_new)
    664                              "range.")
    665         if self.bounds_error and above_bounds.any():
--> 666             raise ValueError("A value in x_new is above the interpolation "
    667                              "range.")
    668

ValueError: A value in x_new is above the interpolation range.
```

Problem description

It raises an error if I try to interpolate. If the time range is exactly a month, then it works:

```python
time = np.arange('2007-02-01', '2007-03-01', dtype='datetime64').astype('datetime64[ns]')
arr = xr.DataArray(
    np.arange(time.size), coords=[time,], dims=('time',), name='data'
)
arr.resample(time='M').interpolate('linear')

<xarray.DataArray 'data' (time: 1)>
array([27.])
Coordinates:
  * time     (time) datetime64[ns] 2007-02-28
```

The problem for the interpolation seems to be that the resampler contains indices out of bounds ('2007-03-31'). That is fine for the aggregations, but it doesn't work with the interpolation.

```python
resampler = arr.resample(time='M')

resampler._full_index
DatetimeIndex(['2007-02-28', '2007-03-31'], dtype='datetime64[ns]', name='time', freq='M')
```
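The out-of-bound label comes from pandas itself: with a partial final month, resampling at 'M' frequency still labels the last bin by its month end, which lies beyond the last data point. A minimal pandas-only demonstration:

```python
import numpy as np
import pandas as pd

# 30 daily values ending mid-month on 2007-03-02.
s = pd.Series(np.arange(30), index=pd.date_range('2007-02-01', '2007-03-02'))
print(s.resample('M').mean().index)
# DatetimeIndex(['2007-02-28', '2007-03-31'], dtype='datetime64[ns]', freq='M')
```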

Expected Output

```python
<xarray.DataArray 'data' (time: 1)>
array([27.])
Coordinates:
  * time     (time) datetime64[ns] 2007-02-28
```

Output of xr.show_versions()

INSTALLED VERSIONS ------------------ commit: None python: 3.6.0.final.0 python-bits: 64 OS: Linux OS-release: 4.13.0-43-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: en_GB.UTF-8 xarray: 0.10.3 pandas: 0.22.0 numpy: 1.14.3 scipy: 1.1.0 netCDF4: 1.3.1 h5netcdf: None h5py: None Nio: None zarr: None bottleneck: None cyordereddict: None dask: 0.17.4 distributed: None matplotlib: 2.2.2 cartopy: 0.16.0 seaborn: None setuptools: 39.2.0 pip: 10.0.1 conda: None pytest: 3.5.1 IPython: 6.4.0 sphinx: 1.7.4
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2197/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#2153 · Bug: side effect on method GroupBy.first · aurghs (35919497) · closed · 1 comment · created 2018-05-17T17:43:25Z · updated 2018-05-29T03:15:08Z · closed 2018-05-29T03:15:08Z · COLLABORATOR · id 324121124

Code Sample, a copy-pastable example if possible

```python
arr = xr.DataArray(
    np.arange(5),
    dims=('x',),
    coords={
        'x': ('x', np.array([1, 1, 1, 2, 2])),
    }
)

gr = arr.groupby('x')
gr.first()

arr

<xarray.DataArray (x: 5)>
array([0, 1, 2, 3, 4])
Coordinates:
  * x        (x) int64 1 2
```

Problem description

A side effect of calling the GroupBy.first method is that it replaces the original array's coordinates with the grouped ones.

Expected Output

```python
arr

<xarray.DataArray (x: 5)>
array([0, 1, 2, 3, 4])
Coordinates:
  * x        (x) int64 1 1 1 2 2
```
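Until the side effect is fixed, a defensive workaround (a simple sketch) is to group a deep copy, so the original coordinates stay intact:

```python
# Grouping a copy leaves arr's coordinates untouched by the
# side effect described above.
gr = arr.copy(deep=True).groupby('x')
gr.first()
arr  # still has Coordinates: * x (x) int64 1 1 1 2 2
```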

Output of xr.show_versions()

INSTALLED VERSIONS ------------------ commit: None python: 3.6.0.final.0 python-bits: 64 OS: Linux OS-release: 4.13.0-41-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 xarray: 0.10.4 pandas: 0.22.0 numpy: 1.14.3 scipy: 1.1.0 netCDF4: 1.3.1 h5netcdf: None h5py: 2.7.1 Nio: None zarr: None bottleneck: None cyordereddict: None dask: 0.17.4 distributed: 1.21.8 matplotlib: 2.2.2 cartopy: 0.16.0 seaborn: None setuptools: 38.4.1 pip: 10.0.1 conda: None pytest: 3.5.1 IPython: 6.2.1 sphinx: 1.7.4
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2153/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue

Advanced export

JSON shape: default, array, newline-delimited, object

CSV options:

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);