issues
16 rows where repo = 13221727 and "updated_at" is on date 2021-07-08 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
714905042 | MDU6SXNzdWU3MTQ5MDUwNDI= | 4486 | Feature request: xr.concat: `stack` parameter | FRidh 2129135 | open | 0 | 1 | 2020-10-05T14:43:40Z | 2021-07-08T17:44:38Z | NONE | Is your feature request related to a problem? Please describe.
In the case of dependent dimensions, there is a lot of missing data, and using a stacked layout is preferable. Composing an array using I am now composing an array using
Describe the solution you'd like
A Initially it may just do the naive
Describe alternatives you've considered
Composing an array using
Additional context
Issue related to |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
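A minimal sketch (not part of the issue above) of what the request is getting at, under the assumption that the usual workaround today is to let `xr.concat` build a dense, NaN-padded result and only then call `.stack()`; the names and values here are illustrative only:

```python
import numpy as np
import xarray as xr

# Two arrays whose "x" extent depends on the scalar coordinate "y";
# concatenating them densely pads the shorter one with NaN.
a = xr.DataArray(np.arange(3.0), dims="x", coords={"x": [0, 1, 2], "y": 0})
b = xr.DataArray(np.arange(2.0), dims="x", coords={"x": [0, 1], "y": 1})

combined = xr.concat([a, b], dim="y")         # dense layout, NaN where data is missing
stacked = combined.stack(sample=("y", "x"))   # stacked layout, obtained only after padding
```

A `stack` parameter on `xr.concat` would presumably produce something like `stacked` directly, without materialising the padded intermediate.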
788534915 | MDU6SXNzdWU3ODg1MzQ5MTU= | 4824 | combine_by_coords can succed when it shouldn't | mathause 10194086 | open | 0 | 15 | 2021-01-18T20:39:29Z | 2021-07-08T17:44:38Z | MEMBER | What happened:
What you expected to happen:
Minimal Complete Verifiable Example:
```python
import numpy as np
import xarray as xr

data = np.arange(5).reshape(1, 5)
x = np.arange(5)
x_name = "lat"

da0 = xr.DataArray(data, dims=("t", x_name), coords={"t": [1], x_name: x}).to_dataset(name="a")

x = x + 1e-6
da1 = xr.DataArray(data, dims=("t", x_name), coords={"t": [2], x_name: x}).to_dataset(name="a")

ds = xr.combine_by_coords((da0, da1))
ds
```
returns:
```python-traceback
ValueError: Resulting object does not have monotonic global indexes along dimension x
```
Anything else we need to know?:
cc @dcherian @TomNicholas Environment: Output of <tt>xr.show_versions()</tt> |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
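A small sketch, not part of the issue above, of one way a caller could guard against nearly-but-not-exactly matching coordinates before combining; the helper name and tolerance are illustrative assumptions:

```python
import numpy as np

def coords_close(ds_a, ds_b, dim, atol=0.0):
    """Return True when the two datasets' coordinates along `dim` agree within atol."""
    a, b = ds_a[dim].values, ds_b[dim].values
    return a.shape == b.shape and np.allclose(a, b, rtol=0.0, atol=atol)

# For the MCVE above, coords_close(da0, da1, "lat") is False, because the
# second dataset's "lat" values are shifted by 1e-6.
```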
546303413 | MDU6SXNzdWU1NDYzMDM0MTM= | 3666 | Raise nice error when attempting to concatenate CFTimeIndex & DatetimeIndex | tlogan2000 22454970 | open | 0 | 9 | 2020-01-07T14:08:03Z | 2021-07-08T17:43:58Z | NONE | MCVE Code Sample
```python
import subprocess
import sys
import wget
import glob

def install(package):
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])

try:
    from xclim import ensembles
except:
    install('xclim')
    from xclim import ensembles

outdir = 'tmp'
url = []
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_ACCESS1-0_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_BNU-ESM_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_CCSM4_historical+rcp45_r1i1p1_1950-2100_tg_mean_YS.nc')
url.append('https://github.com/Ouranosinc/xclim/raw/master/tests/testdata/EnsembleStats/BCCAQv2+ANUSPLIN300_CCSM4_historical+rcp45_r2i1p1_1950-2100_tg_mean_YS.nc')

for u in url:
    wget.download(u, out=outdir)

datasets = glob.glob(f'{outdir}/1950.nc')
ens1 = ensembles.create_ensemble(datasets)
print(ens1)
```
Expected Output
Following advice of @dcherian (https://github.com/Ouranosinc/xclim/issues/281#issue-508073942) we have started testing builds of Using xarray 0.14.1 via pip the above code generates a concatenated dataset with new added dimension 'realization'
Problem Description
using xarray@master the
Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
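A compact sketch, not taken from the issue above, that triggers the same index mismatch without any downloads; it assumes cftime is installed and that one time axis decodes to a pandas `DatetimeIndex` while the other decodes to a `CFTimeIndex` (non-standard calendar):

```python
import numpy as np
import pandas as pd
import xarray as xr

da_std = xr.DataArray(np.zeros(3), dims="time",
                      coords={"time": pd.date_range("2000-01-01", periods=3)})
da_cftime = xr.DataArray(np.zeros(3), dims="time",
                         coords={"time": xr.cftime_range("2000-01-01", periods=3,
                                                         calendar="noleap")})

# Concatenating a DatetimeIndex-backed and a CFTimeIndex-backed time axis is
# exactly the case the issue asks to fail with a clear error message:
# xr.concat([da_std, da_cftime], dim="time")
```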
503711327 | MDU6SXNzdWU1MDM3MTEzMjc= | 3381 | concat() fails when args have sparse.COO data and different fill values | khaeru 1634164 | open | 0 | 4 | 2019-10-07T21:54:06Z | 2021-07-08T17:43:57Z | NONE | MCVE Code Sample
```python
import numpy as np
import pandas as pd
import sparse
import xarray as xr

# Indices and raw data
foo = [f'foo{i}' for i in range(6)]
bar = [f'bar{i}' for i in range(6)]
raw = np.random.rand(len(foo) // 2, len(bar))

# DataArray
a = xr.DataArray(
    data=sparse.COO.from_numpy(raw),
    coords=[foo[:3], bar],
    dims=['foo', 'bar'])
print(a.data.fill_value)  # 0.0

# Created from a pd.Series
b_series = pd.DataFrame(raw, index=foo[3:], columns=bar) \
    .stack() \
    .rename_axis(index=['foo', 'bar'])
b = xr.DataArray.from_series(b_series, sparse=True)
print(b.data.fill_value)  # nan

# Works despite inconsistent fill-values
a + b
a * b

# Fails: complains about inconsistent fill-values
xr.concat([a, b], dim='foo')  # ***

# The fill_value argument doesn't help
xr.concat([a, b], dim='foo', fill_value=np.nan)

def fill_value(da):
    """Try to coerce one argument to a consistent fill-value."""
    return xr.DataArray(
        data=sparse.as_coo(da.data, fill_value=np.nan),
        coords=da.coords,
        dims=da.dims,
        name=da.name,
        attrs=da.attrs,
    )

# Fails: "Cannot provide a fill-value in combination with something that
# already has a fill-value"
print(xr.concat([a.pipe(fill_value), b], dim='foo'))

# If we cheat by recreating 'a' from scratch, copying the fill value of the
# intended other argument, it works again:
a = xr.DataArray(
    data=sparse.COO.from_numpy(raw, fill_value=b.data.fill_value),
    coords=[foo[:3], bar],
    dims=['foo', 'bar'])
c = xr.concat([a, b], dim='foo')
print(c.data.fill_value)  # nan

# But simple operations again create objects with potentially incompatible
# fill-values
d = c.sum(dim='bar')
print(d.data.fill_value)  # 0.0
```
Expected
Problem Description
Some basic xarray manipulations don't work on xarray should automatically coerce objects into a compatible state, or at least provide users with methods to do so. Behaviour should also be documented, e.g. in this instance, which operations (here,
Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
512205079 | MDU6SXNzdWU1MTIyMDUwNzk= | 3445 | Merge fails when sparse Dataset has overlapping dimension values | k-a-mendoza 4605410 | open | 0 | 3 | 2019-10-24T22:08:12Z | 2021-07-08T17:43:57Z | NONE | Sparse numpy arrays used in a merge operation seem to fail under certain coordinate settings. for example, this works perfectly:
```python
import xarray as xr
import numpy as np

# NOTE: the definitions of `data` and `time` for this first example were not
# preserved in this record.
data_array1 = xr.DataArray(data, name='default',
                           dims=['source', 'receiver', 'time'],
                           coords={'source': ['X.1'], 'receiver': ['X.2'],
                                   'time': time}).to_dataset()
data_array2 = xr.DataArray(data, name='default',
                           dims=['source', 'receiver', 'time'],
                           coords={'source': ['X.2'], 'receiver': ['X.1'],
                                   'time': time}).to_dataset()
dataset1 = xr.merge([data_array1, data_array2])
```
But this raises an
```python
import xarray as xr
import numpy as np
import sparse

data = sparse.COO.from_numpy(np.random.uniform(-1, 1, (1, 1, 100)))
time = np.linspace(0, 1, num=100)

data_array1 = xr.DataArray(data, name='default',
                           dims=['source', 'receiver', 'time'],
                           coords={'source': ['X.1'], 'receiver': ['X.2'],
                                   'time': time}).to_dataset()
data_array2 = xr.DataArray(data, name='default',
                           dims=['source', 'receiver', 'time'],
                           coords={'source': ['X.2'], 'receiver': ['X.1'],
                                   'time': time}).to_dataset()
dataset1 = xr.merge([data_array1, data_array2])
```
I have noticed this occurs when the merger would seem to add dimensions filled with nan values. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
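A hedged workaround sketch, not from the issue above: for the sparse example, densify the variable before merging so the NaN padding introduced by the merge cannot collide with the COO fill value (only acceptable when the dense cube fits in memory; the helper and variable names are illustrative):

```python
def densify(ds, name="default"):
    """Replace a sparse.COO-backed variable with its dense equivalent."""
    return ds.assign({name: ds[name].copy(data=ds[name].data.todense())})

# dataset1 = xr.merge([densify(data_array1), densify(data_array2)])
```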
544375718 | MDU6SXNzdWU1NDQzNzU3MTg= | 3659 | Error concatenating Multiindex variables | hazbottles 14136435 | open | 0 | 1 | 2020-01-01T16:36:26Z | 2021-07-08T17:43:57Z | CONTRIBUTOR | MCVE Code Sample```python
Expected OutputThe output should be the same as first concatenating the DataArrays, then extracting the dimension location: ```python
Problem Description```python
This is why an error is thrown: Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
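The MCVE above did not survive the export; the following is a hedged reconstruction of the kind of example the title describes (names and values are assumptions), where single locations selected from a MultiIndex dimension are concatenated back together:

```python
import xarray as xr

da = xr.DataArray([0, 1], dims=["location"],
                  coords={"lat": ("location", [10, 20]),
                          "lon": ("location", [30, 40])})
da = da.set_index(location=["lat", "lon"])    # "location" becomes a MultiIndex

first = da.isel(location=0)
second = da.isel(location=1)
# xr.concat([first, second], dim="location")  # reported to error on MultiIndex scalars
```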
489825483 | MDU6SXNzdWU0ODk4MjU0ODM= | 3281 | [proposal] concatenate by axis, ignore dimension names | Hoeze 1200058 | open | 0 | 4 | 2019-09-05T15:06:22Z | 2021-07-08T17:42:53Z | NONE | Hi, I wrote a helper function which allows to concatenate arrays like I often need this to combine very different feature types.
```python
from typing import Union, Tuple, List

import numpy as np
import xarray as xr


def concat_by_axis(
darrs: Union[List[xr.DataArray], Tuple[xr.DataArray]],
dims: Union[List[str], Tuple[str]],
axis: int = None,
**kwargs
):
"""
Concat arrays along some axis similar to
``` Would it make sense to include this in xarray? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
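The body of the author's helper is not preserved in the record above; the following is only a sketch of one way a concatenate-by-axis could be expressed with existing xarray primitives, under the simplifying assumption that, apart from the concatenation axis, the inputs already share dimension names:

```python
from typing import List, Tuple, Union

import xarray as xr


def concat_by_axis_sketch(darrs: Union[List[xr.DataArray], Tuple[xr.DataArray, ...]],
                          axis: int = 0, new_dim: str = "concat_dim", **kwargs):
    """Concatenate along a positional axis by renaming it to a common dimension name."""
    renamed = [da.rename({da.dims[axis]: new_dim}) for da in darrs]
    return xr.concat(renamed, dim=new_dim, **kwargs)
```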
494906646 | MDU6SXNzdWU0OTQ5MDY2NDY= | 3315 | xr.combine_nested() fails when passed nested DataSets | friedrichknuth 10554254 | open | 0 | 8 | 2019-09-17T23:47:44Z | 2021-07-08T17:42:53Z | NONE |
xr.combine_nested() works when passed a nested list of DataArray objects.
```KeyError Traceback (most recent call last) <ipython-input-8-c0035883fc68> in <module> 3 ds3 = da3.to_dataset() 4 ds4 = da4.to_dataset() ----> 5 xr.combine_nested([[ds1, ds2], [ds3, ds4]], concat_dim=["x", "y"]) ~/repos/contribute/xarray/xarray/core/combine.py in combine_nested(datasets, concat_dim, compat, data_vars, coords, fill_value, join) 462 ids=False, 463 fill_value=fill_value, --> 464 join=join, 465 ) 466 ~/repos/contribute/xarray/xarray/core/combine.py in _nested_combine(datasets, concat_dims, compat, data_vars, coords, ids, fill_value, join) 305 coords=coords, 306 fill_value=fill_value, --> 307 join=join, 308 ) 309 return combined ~/repos/contribute/xarray/xarray/core/combine.py in _combine_nd(combined_ids, concat_dims, data_vars, coords, compat, fill_value, join) 196 compat=compat, 197 fill_value=fill_value, --> 198 join=join, 199 ) 200 (combined_ds,) = combined_ids.values() ~/repos/contribute/xarray/xarray/core/combine.py in _combine_all_along_first_dim(combined_ids, dim, data_vars, coords, compat, fill_value, join) 218 datasets = combined_ids.values() 219 new_combined_ids[new_id] = _combine_1d( --> 220 datasets, dim, compat, data_vars, coords, fill_value, join 221 ) 222 return new_combined_ids ~/repos/contribute/xarray/xarray/core/combine.py in _combine_1d(datasets, concat_dim, compat, data_vars, coords, fill_value, join) 246 compat=compat, 247 fill_value=fill_value, --> 248 join=join, 249 ) 250 except ValueError as err: ~/repos/contribute/xarray/xarray/core/concat.py in concat(objs, dim, data_vars, coords, compat, positions, fill_value, join) 131 "objects, got %s" % type(first_obj) 132 ) --> 133 return f(objs, dim, data_vars, coords, compat, positions, fill_value, join) 134 135 ~/repos/contribute/xarray/xarray/core/concat.py in _dataset_concat(datasets, dim, data_vars, coords, compat, positions, fill_value, join) 363 for k in datasets[0].variables: 364 if k in concat_over: --> 365 vars = ensure_common_dims([ds.variables[k] for ds in datasets]) 366 combined = concat_vars(vars, dim, positions) 367 assert isinstance(combined, Variable) ~/repos/contribute/xarray/xarray/core/concat.py in <listcomp>(.0) 363 for k in datasets[0].variables: 364 if k in concat_over: --> 365 vars = ensure_common_dims([ds.variables[k] for ds in datasets]) 366 combined = concat_vars(vars, dim, positions) 367 assert isinstance(combined, Variable) ~/repos/contribute/xarray/xarray/core/utils.py in getitem(self, key) 383 384 def getitem(self, key: K) -> V: --> 385 return self.mapping[key] 386 387 def iter(self) -> Iterator[K]: KeyError: 'a' ``` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
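The construction of `da1`–`da4` did not survive the export; below is a hedged guess at the shape of the failing call, consistent with the traceback above (the KeyError suggests the four DataArrays do not all carry the same variable name once converted to Datasets); every value here is an assumption:

```python
import numpy as np
import xarray as xr

# Assumed setup: 1x1 tiles to be combined along a 2x2 nested grid.
da1 = xr.DataArray(np.zeros((1, 1)), dims=["x", "y"], name="a")
da2 = xr.DataArray(np.zeros((1, 1)), dims=["x", "y"], name="b")
da3 = xr.DataArray(np.zeros((1, 1)), dims=["x", "y"], name="a")
da4 = xr.DataArray(np.zeros((1, 1)), dims=["x", "y"], name="b")
ds1, ds2, ds3, ds4 = (da.to_dataset() for da in (da1, da2, da3, da4))

# xr.combine_nested([[ds1, ds2], [ds3, ds4]], concat_dim=["x", "y"])  # KeyError: 'a'
```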
329575874 | MDU6SXNzdWUzMjk1NzU4NzQ= | 2217 | tolerance for alignment | naomi-henderson 31460695 | open | 0 | 23 | 2018-06-05T18:34:45Z | 2021-07-08T17:42:52Z | NONE | When using open_mfdataset on files which 'should' share a grid, there is often a small mismatch which results in the grid not aligning properly. This happens frequently when trying to read data from large climate models from multiple files of the same variable, same lon,lat grid and different time intervals. This silent behavior means that I always have to check the sizes of the lon,lat grids whenever I rely on mfdataset to concatenate the data in time. Here is an example in which I create two 1d DataArrays which have slightly different coordinates:
```python
import xarray as xr
import numpy as np
from glob import glob

tol = 1e-14
x1 = np.arange(1, 6) + tol*np.random.rand(5)
da1 = xr.DataArray([9, 0, 2, 1, 0], dims=['x'], coords={'x': x1})

x2 = np.arange(1, 6) + tol*np.random.rand(5)
da2 = da1.copy()
da2['x'] = x2

print(da1.x, '\n', da2.x)
```
```python
db = xr.open_mfdataset(glob('da?.nc'))
db
```
Request / suggestion
What is needed is a user-specified tolerance level to give to open_mfdataset, passed on to align, which will accept these grids as the same.
Possibly related to https://github.com/pydata/xarray/issues/2215
xr.__version__
'0.10.4'
thanks, Naomi |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2217/reactions", "total_count": 10, "+1": 10, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
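Until such a tolerance option exists, one workaround sketch (not from the issue above) is to snap the second grid onto the first with `reindex`, which already accepts `method` and `tolerance`; the concat dimension name is only illustrative:

```python
# Snap da2's 'x' coordinate onto da1's grid, accepting mismatches up to 1e-9.
da2_aligned = da2.reindex(x=da1.x, method="nearest", tolerance=1e-9)
combined = xr.concat([da1, da2_aligned], dim="time")
```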
415802678 | MDU6SXNzdWU0MTU4MDI2Nzg= | 2796 | Better explanation of 'minimal' in xarray.open_mfdataset(data_vars='minimal') in docs? | wckoeppen 5704500 | open | 0 | 2 | 2019-02-28T20:11:42Z | 2021-07-08T17:42:52Z | NONE | Problem description
I'm currently troubleshooting some overly long (to me) load times using open_mfdataset on GFS data. In trying to speed things up, I'm trying to specify just the four variables I actually care about using In the docs I do see that if
However, I can't seem to understand what the 'minimal' variables are from this sentence in the docs:
All the variables in the CF-compliant GFS data are associated with dimensions. So does that mean that all the variables in the files will be concatenated, regardless if I specify which ones I want? I feel like I'm misunderstanding what is included by default. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2796/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
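A brief sketch, not from the issue above, of the distinction in practice; the file pattern and variable names are invented for illustration, and version-dependent arguments such as `combine` are omitted:

```python
import xarray as xr

# data_vars="minimal": only variables that already contain the concatenation
# dimension are concatenated; variables without it are taken from the first
# file and must be identical across files.
ds_minimal = xr.open_mfdataset("gfs_*.nc", data_vars="minimal")

# A list of names concatenates the listed variables in addition to that
# "minimal" set, which is the closest fit to "only load the four variables
# I actually care about".
ds_subset = xr.open_mfdataset("gfs_*.nc", data_vars=["t2m", "u10", "v10", "prate"])
```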
423749397 | MDU6SXNzdWU0MjM3NDkzOTc= | 2836 | xarray.concat() with compat='identical' fails for DataArray attrs | aldanor 2418513 | open | 0 | 9 | 2019-03-21T14:11:29Z | 2021-07-08T17:42:52Z | NONE | Not sure if it was ever supposed to work with numpy arrays, but it actually does :thinking:: ```python
However, it fails if you use DataArray attrs: ```python
Given that the check is simply |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
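The two snippets above were lost in the export; this is a hedged reconstruction of the reported behaviour, with assumed values: identical attrs holding numpy arrays compare fine, while attrs holding DataArray objects are reported to fail the equivalence check:

```python
import numpy as np
import xarray as xr

w = np.arange(3)
ok = xr.concat(
    [xr.DataArray([1], dims="x", attrs={"w": w}),
     xr.DataArray([2], dims="x", attrs={"w": w})],
    dim="x", compat="identical")              # works with ndarray attrs

w_da = xr.DataArray(np.arange(3), dims="y")
# xr.concat(
#     [xr.DataArray([1], dims="x", attrs={"w": w_da}),
#      xr.DataArray([2], dims="x", attrs={"w": w_da})],
#     dim="x", compat="identical")            # reported to raise
```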
446054247 | MDU6SXNzdWU0NDYwNTQyNDc= | 2975 | Inconsistent/confusing behaviour when concatenating dimension coords | TomNicholas 35968931 | open | 0 | 2 | 2019-05-20T11:01:37Z | 2021-07-08T17:42:52Z | MEMBER | I noticed that with multiple conflicting dimension coords then concat can give pretty weird/counterintuitive results, at least compared to what the documentation suggests they should give: ```python Create two datasets with conflicting coordinatesobjs = [Dataset({'x': [0], 'y': [1]}), Dataset({'y': [0], 'x': [1]})] [<xarray.Dataset> Dimensions: (x: 1, y: 1) Coordinates: * x (x) int64 0 * y (y) int64 1 Data variables: empty, <xarray.Dataset> Dimensions: (x: 1, y: 1) Coordinates: * y (y) int64 0 * x (x) int64 1 Data variables: empty] ``` ```python Try to join along only 'x',coords='minimal' so concatenate "Only coordinates in which the dimension already appears"concat(objs, dim='x', coords='minimal') <xarray.Dataset> Dimensions: (x: 2, y: 2) Coordinates: * y (y) int64 0 1 * x (x) int64 0 1 Data variables: empty It's joined along x and y!``` Based on my reading of the docstring for concat, I would have expected this to not attempt to concatenate y, because Now let's try to get concat to broadcast 'y' across 'x': ```python Try to join along only 'x' by setting coords='different'concat(objs, dim='x', coords='different') ``` Now as "Data variables which are not equal (ignoring attributes) across all datasets are also concatenated" then I would have expected 'y' to be concatenated across 'x', i.e. to add the 'x' dimension to the 'y' coord, i.e:
Same again but without dimension coordsIf we create the same sort of objects but the variables are data vars not coords, then everything behaves exactly as expected: ```python objs2 = [Dataset({'a': ('x', [0]), 'b': ('y', [1])}), Dataset({'a': ('x', [1]), 'b': ('y', [0])})] [<xarray.Dataset> Dimensions: (x: 1, y: 1) Dimensions without coordinates: x, y Data variables: a (x) int64 0 b (y) int64 1, <xarray.Dataset> Dimensions: (x: 1, y: 1) Dimensions without coordinates: x, y Data variables: a (x) int64 1 b (y) int64 0] concat(objs2, dim='x', data_vars='minimal') ValueError: variable b not equal across datasets concat(objs2, dim='x', data_vars='different') <xarray.Dataset> Dimensions: (x: 2, y: 1) Dimensions without coordinates: x, y Data variables: a (x) int64 0 1 b (x, y) int64 1 0 ``` Also if you do the same again but with coordinates which are not dimension coords, i.e: ```python objs3 = [Dataset(coords={'a': ('x', [0]), 'b': ('y', [1])}), Dataset(coords={'a': ('x', [1]), 'b': ('y', [0])})] [<xarray.Dataset> Dimensions: (x: 1, y: 1) Coordinates: a (x) int64 0 b (y) int64 1 Dimensions without coordinates: x, y Data variables: empty, <xarray.Dataset> Dimensions: (x: 1, y: 1) Coordinates: a (x) int64 1 b (y) int64 0 Dimensions without coordinates: x, y Data variables: empty] ``` then this again gives the expected concatenation behaviour. So this implies that the compatibility checks that are being done on the data vars are not being done on the coords, but only if they are dimension coordinates! Either this is not the desired behaviour or the concat docstring needs to be a lot clearer. If we agree that this is not the desired behaviour then I will have a look inside EDIT: Presumably this has something to do with the ToDo in the code for |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
193294569 | MDU6SXNzdWUxOTMyOTQ1Njk= | 1151 | Scalar coords vs. concat | crusaderky 6213168 | open | 0 | 11 | 2016-12-03T15:42:18Z | 2021-07-08T17:42:18Z | MEMBER | Why does this work: ```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
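The snippet that "Why does this work:" refers to was not preserved; as a hedged illustration of the general pattern the title points at (scalar coordinates interacting with concat), with assumed values only:

```python
import xarray as xr

a = xr.DataArray([1, 2], dims="x", coords={"y": 0})   # scalar coord y=0
b = xr.DataArray([3, 4], dims="x", coords={"y": 1})   # scalar coord y=1

# Concatenating along the scalar coordinate promotes it to a real dimension:
c = xr.concat([a, b], dim="y")
# c has dims ("y", "x") with y = [0, 1]
```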
223231729 | MDU6SXNzdWUyMjMyMzE3Mjk= | 1379 | xr.concat consuming too much resources | rafa-guedes 7799184 | open | 0 | 4 | 2017-04-20T23:33:52Z | 2021-07-08T17:42:18Z | CONTRIBUTOR | Hi, I am reading in several (~1000) small ascii files into Dataset objects and trying to concatenate them over one specific dimension but I eventually blow my memory up. The file glob is not huge (~700M, my computer has ~16G) and I can do it fine if I only read in the Datasets appending them to a list without concatenating them (my memory increases by 5% only or so by the time I had read them all). However, when trying to concatenate each file into one single Dataset upon reading over a loop, the processing speeds drastically reduce before I have read 10% of the files or so and my memory usage keeps going up until it eventually blows up before I read and concatenate 30% of these files (the screenshot below was taken before it blew up, the memory usage was under 20% by the start of the processing). I was wondering if this is expected, or if there something that could be improved to make that work more efficiently please. I'm changing my approach now by extracting numpy arrays from the individual Datasets, concatenating these numpy arrays and defining the final Dataset only at the end. Thanks. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1379/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
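A sketch of the approach described at the end of the issue above (reader and variable names are hypothetical): collect plain numpy arrays in the loop and build a single Dataset once, instead of growing a Dataset with repeated `xr.concat`, which copies the accumulated data on every iteration:

```python
import numpy as np
import xarray as xr


def combine_without_incremental_concat(paths, reader):
    """reader(path) -> Dataset holding a 'var' (time, site) variable; names are hypothetical."""
    chunks, times = [], []
    for path in sorted(paths):
        ds = reader(path)
        chunks.append(ds["var"].values)
        times.append(ds["time"].values)
    # Build the final Dataset in one step from the concatenated numpy arrays.
    return xr.Dataset(
        {"var": (("time", "site"), np.concatenate(chunks, axis=0))},
        coords={"time": np.concatenate(times)},
    )
```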
471409673 | MDU6SXNzdWU0NzE0MDk2NzM= | 3158 | Out of date docstring for concat_dim in open_mfdataset | zdgriffith 17169544 | open | 0 | 3 | 2019-07-23T00:01:05Z | 2021-07-08T17:40:45Z | CONTRIBUTOR | In the
This is true for the default |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
496688781 | MDU6SXNzdWU0OTY2ODg3ODE= | 3330 | Feature requests for DataArray.rolling | fjanoos 923438 | closed | 0 | 1 | 2019-09-21T18:58:21Z | 2021-07-08T16:29:18Z | 2021-07-08T16:29:18Z | NONE | In |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3330/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);