
issues


14 rows where repo = 13221727 and user = 81219 sorted by updated_at descending


type 2

  • issue 7
  • pull 7

state 2

  • closed 12
  • open 2

repo 1

  • xarray · 14
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
2180447578 PR_kwDOAMm_X85pUKwG 8821 Add small test exposing issue from #7794 and suggestion for `_wrap_numpy_scalars` fix huard 81219 open 0     1 2024-03-11T23:40:17Z 2024-04-03T18:53:28Z   CONTRIBUTOR   0 pydata/xarray/pulls/8821

_wrap_numpy_scalars relies on np.isscalar, which incorrectly labels a single cftime object as not a scalar.

```python
import cftime
import numpy as np

c = cftime.datetime(2000, 1, 1, calendar='360_day')
np.isscalar(c)  # False
```

The PR adds logic to handle non-numpy objects using the np.ndim function. The logic for built-ins and numpy objects should remain the same.

The function logic could possibly be rewritten more clearly as:

```python
if hasattr(array, "dtype"):
    if np.isscalar(array):
        return np.array(array)
    else:
        return array

if np.ndim(array) == 0:
    return np.array(array)

return array

```
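For illustration, a minimal sketch of why `np.ndim`-based dispatch handles such objects (using a plain Python object as a stand-in for `cftime.datetime`, since cftime may not be installed):

```python
import numpy as np

class FakeCFDatetime:
    """Stand-in for cftime.datetime: a scalar-like object NumPy doesn't know."""
    pass

obj = FakeCFDatetime()

# np.isscalar only recognizes builtins and NumPy scalar types...
print(np.isscalar(obj))  # False
# ...while np.ndim treats any non-array-like object as zero-dimensional.
print(np.ndim(obj))      # 0
```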

  • [x] Closes #7794
  • [x] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8821/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
354535284 MDU6SXNzdWUzNTQ1MzUyODQ= 2385 Resample with default argument huard 81219 open 0     6 2018-08-28T01:24:54Z 2023-09-27T08:42:26Z   CONTRIBUTOR      

Code Sample, a copy-pastable example if possible

```python
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range('2000-01-01', freq='D', periods=365 * 3)
ds = xr.Dataset({'foo': ('time', np.arange(365 * 3)), 'time': time})
ds.foo.resample(time=None)
```

```
TypeError                                 Traceback (most recent call last)
<ipython-input-34-d7109c181d10> in <module>()
----> 1 ds.foo.resample(time=None)

/home/david/src/anaconda3/lib/python2.7/site-packages/xarray/core/common.pyc in resample(self, freq, dim, how, skipna, closed, label, base, keep_attrs, **indexer)
    678                              "was passed %r" % dim)
    679         group = DataArray(dim, [(dim.dims, dim)], name=RESAMPLE_DIM)
--> 680         grouper = pd.Grouper(freq=freq, closed=closed, label=label, base=base)
    681         resampler = self._resample_cls(self, group=group, dim=dim_name,
    682                                        grouper=grouper,

TypeError: __init__() got an unexpected keyword argument 'base'
```

Problem description

Although None is the default value for freq (v0.10.6), actually using None as the freq raises an error.

Expected Output

I would like resample(time=None) to return ds.foo itself, or a DataArrayResample instance that includes the entire array.
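The requested default behaviour can be sketched with a stand-in class (hypothetical; not xarray's implementation): freq=None would simply act as the identity.

```python
# Hypothetical stand-in, not xarray's API: sketch of resample(time=None)
# returning the object itself (the whole array as a single group).
class ToyArray:
    def resample(self, **indexer):
        (dim, freq), = indexer.items()
        if freq is None:
            return self  # no-op: the entire array is one group
        raise NotImplementedError("real resampling elided from this sketch")

foo = ToyArray()
assert foo.resample(time=None) is foo
```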

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.14.final.0
python-bits: 64
OS: Linux
OS-release: 4.15.0-30-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: None.None

xarray: 0.10.6
pandas: 0.23.0
numpy: 1.14.3
scipy: 1.0.0
netCDF4: 1.3.1
h5netcdf: 0.5.1
h5py: 2.8.0
Nio: None
zarr: None
bottleneck: 1.2.1
cyordereddict: 1.0.0
dask: 0.17.5
distributed: 1.21.8
matplotlib: 2.2.2
cartopy: None
seaborn: 0.8.1
setuptools: 39.2.0
pip: 9.0.3
conda: 4.4.11
pytest: 3.6.0
IPython: 5.7.0
sphinx: 1.7.4
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2385/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1461935127 I_kwDOAMm_X85XI1wX 7314 Scatter plot infers weird default values huard 81219 closed 0     2 2022-11-23T15:10:41Z 2023-02-11T20:55:17Z 2023-02-11T20:55:17Z CONTRIBUTOR      

What happened?

The xarray.plot.scatter method has changed its behavior in 2022.10.0. Code that used to work now doesn't.

The issue seems to be related to default values for the size and hue of markers. Instead of using the s and c arguments, xarray tries to infer DataArrays to use, but picks nonsensical values.

What did you expect to happen?

A scatter plot with default size and color for markers. Now xarray has inferred that the size is somehow related to the j dimension, and the hue to bnds.

Note that the calculations required to draw the figure with those defaults take a huge amount of time. In the example below, I've subsetted the file so the code runs in a short time. Without subsetting, it runs forever.

Minimal Complete Verifiable Example

```python
import xarray as xr
from matplotlib import pyplot as plt

url = "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/testdata/xclim/cmip6/sic_SImon_CCCma-CanESM5_ssp245_r13i1p2f1_2020.nc"
ds = xr.open_dataset(url)
t = ds.isel(i=slice(0, 10), j=slice(0, 11))
t.plot.scatter(x="longitude", y="latitude", s=1)
plt.show()
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

No response

Anything else we need to know?

Environment

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-131-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: ('en_CA', 'UTF-8')
libhdf5: 1.10.4
libnetcdf: 4.7.3

xarray: 2022.10.0
pandas: 1.4.3
numpy: 1.21.4
scipy: None
netCDF4: 1.5.7
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.5
dask: 2022.7.0
distributed: None
matplotlib: 3.6.2
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.5.0
cupy: None
pint: 0.20.1
sparse: None
flox: None
numpy_groupies: None
setuptools: 62.6.0
pip: 22.0.4
conda: None
pytest: 7.1.2
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7314/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
937160239 MDU6SXNzdWU5MzcxNjAyMzk= 5576 Slicing bug with pandas 1.3 and CFTimeIndex huard 81219 closed 0     2 2021-07-05T14:44:17Z 2021-07-05T16:50:30Z 2021-07-05T16:50:30Z CONTRIBUTOR      

What happened: Slicing into a DataArray along time with a CFTimeIndex fails since upgrade to pandas 1.3

What you expected to happen: The usual.

Minimal Complete Verifiable Example:

```python
import xarray as xr

t = xr.cftime_range('2000-01-01', '2030-12-31', freq='D', calendar='noleap')
ref = xr.DataArray(range(len(t)), dims=('time',), coords={'time': t})
ref.sel(time=slice(None, "2015-01-01"))
```

```python
TypeError                                 Traceback (most recent call last)
<ipython-input-5-3afe7d577940> in <module>
----> 1 ref.sel(time=slice(None, "2015-01-01"))

~/.conda/envs/xclim/lib/python3.8/site-packages/xarray/core/dataarray.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   1269         Dimensions without coordinates: points
   1270         """
-> 1271         ds = self._to_temp_dataset().sel(
   1272             indexers=indexers,
   1273             drop=drop,

~/.conda/envs/xclim/lib/python3.8/site-packages/xarray/core/dataset.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   2363         """
   2364         indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel")
-> 2365         pos_indexers, new_indexes = remap_label_indexers(
   2366             self, indexers=indexers, method=method, tolerance=tolerance
   2367         )

~/.conda/envs/xclim/lib/python3.8/site-packages/xarray/core/coordinates.py in remap_label_indexers(obj, indexers, method, tolerance, **indexers_kwargs)
    419     }
    420
--> 421     pos_indexers, new_indexes = indexing.remap_label_indexers(
    422         obj, v_indexers, method=method, tolerance=tolerance
    423     )

~/.conda/envs/xclim/lib/python3.8/site-packages/xarray/core/indexing.py in remap_label_indexers(data_obj, indexers, method, tolerance)
    272         coords_dtype = data_obj.coords[dim].dtype
    273         label = maybe_cast_to_coords_dtype(label, coords_dtype)
--> 274         idxr, new_idx = convert_label_indexer(index, label, dim, method, tolerance)
    275         pos_indexers[dim] = idxr
    276         if new_idx is not None:

~/.conda/envs/xclim/lib/python3.8/site-packages/xarray/core/indexing.py in convert_label_indexer(index, label, index_name, method, tolerance)
    119             "cannot use method argument if any indexers are slice objects"
    120         )
--> 121         indexer = index.slice_indexer(
    122             _sanitize_slice_element(label.start),
    123             _sanitize_slice_element(label.stop),

~/.conda/envs/xclim/lib/python3.8/site-packages/pandas/core/indexes/base.py in slice_indexer(self, start, end, step, kind)
   5684         slice(1, 3, None)
   5685         """
-> 5686         start_slice, end_slice = self.slice_locs(start, end, step=step)
   5687
   5688         # return a slice

~/.conda/envs/xclim/lib/python3.8/site-packages/pandas/core/indexes/base.py in slice_locs(self, start, end, step, kind)
   5892         end_slice = None
   5893         if end is not None:
-> 5894             end_slice = self.get_slice_bound(end, "right")
   5895         if end_slice is None:
   5896             end_slice = len(self)

~/.conda/envs/xclim/lib/python3.8/site-packages/pandas/core/indexes/base.py in get_slice_bound(self, label, side, kind)
   5796         # For datetime indices label may be a string that has to be converted
   5797         # to datetime boundary according to its resolution.
-> 5798         label = self._maybe_cast_slice_bound(label, side)
   5799
   5800         # we need to look up the label

TypeError: _maybe_cast_slice_bound() missing 1 required positional argument: 'kind'
```

Anything else we need to know?: A quick diagnostic suggests that convert_label_indexer should call pandas.Index.slice_indexer with a kind argument.
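The failure mode can be reproduced without pandas or xarray; a minimal sketch (hypothetical classes, not the real pandas/xarray code) of the signature mismatch and the obvious fix:

```python
# The base class calls an override with fewer arguments than it requires.
class Base:
    def get_slice_bound(self, label, side):
        # pandas 1.3 stopped passing `kind` positionally at this call site
        return self._maybe_cast_slice_bound(label, side)

class Broken(Base):
    # still requires `kind`, so the call above raises TypeError
    def _maybe_cast_slice_bound(self, label, side, kind):
        return label

class Fixed(Base):
    # giving `kind` a default restores compatibility
    def _maybe_cast_slice_bound(self, label, side, kind=None):
        return label

try:
    Broken().get_slice_bound("2015-01-01", "right")
except TypeError as err:
    print(err)  # reproduces the "missing 1 required positional argument" error

print(Fixed().get_slice_bound("2015-01-01", "right"))
```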

Environment:

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.6 | packaged by conda-forge | (default, Jan 25 2021, 23:21:18) [GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-77-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: ('en_CA', 'UTF-8')
libhdf5: 1.10.6
libnetcdf: 4.7.4

xarray: 0.18.2
pandas: 1.3.0
numpy: 1.20.0
scipy: 1.6.3
netCDF4: 1.5.5.1
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.4.1
nc_time_axis: 1.2.0
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2021.01.1
distributed: 2021.01.1
matplotlib: 3.4.2
cartopy: None
seaborn: None
numbagg: None
pint: 0.16.1
setuptools: 49.6.0.post20210108
pip: 21.0.1
conda: None
pytest: 6.2.2
IPython: 7.20.0
sphinx: 4.0.2
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5576/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
741100024 MDExOlB1bGxSZXF1ZXN0NTE5NDc0NzQz 4573 Update xESMF link to pangeo-xesmf in related-projects huard 81219 closed 0     1 2020-11-11T22:00:34Z 2020-11-12T14:54:08Z 2020-11-12T14:53:56Z CONTRIBUTOR   0 pydata/xarray/pulls/4573

The new link is where development now occurs.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4573/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
369639339 MDU6SXNzdWUzNjk2MzkzMzk= 2481 Implement CFPeriodIndex huard 81219 closed 0     3 2018-10-12T17:20:04Z 2020-11-02T01:26:48Z 2020-11-02T01:26:48Z CONTRIBUTOR      

A CFPeriodIndex supporting non-standard calendars would be useful to facilitate climate analyses. The use case for me would be to find the start and end date of a resampling group. This is useful to spot missing values in a resampled time series, or to create time_bnds arrays in a netCDF file.

```python
import xarray as xr
import pandas as pd

cftime = xr.cftime_range(start='2000-01-01', periods=361, freq='D', calendar='360_day')
pdtime = pd.date_range(start='2000-01-01', periods=361, freq='D')

cf_da = xr.DataArray(range(361), coords={'time': cftime}, dims='time')
pd_da = xr.DataArray(range(361), coords={'time': pdtime}, dims='time')

cf_c = cf_da.resample(time='M').count()
pd_c = pd_da.resample(time='M').count()

cf_p = cf_c.indexes['time'].to_period()
pd_p = pd_c.indexes['time'].to_period()

cf_expected_days_in_group = cf_p.end_time - cf_p.start_time + pd.offsets.Day(1)
pd_expected_days_in_group = pd_p.end_time - pd_p.start_time + pd.offsets.Day(1)
```

Depends on #2191

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2481/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
492966281 MDU6SXNzdWU0OTI5NjYyODE= 3304 DataArray.quantile does not honor `keep_attrs` huard 81219 closed 0     3 2019-09-12T18:39:47Z 2020-04-05T18:56:30Z 2019-09-15T22:16:15Z CONTRIBUTOR      

MCVE Code Sample

```python
import xarray as xr

da = xr.DataArray([0, 0], dims="x", attrs={'units': 'K'})
out = da.quantile(.9, dim='x', keep_attrs=True)
out.attrs  # returns OrderedDict()
```

Expected Output

OrderedDict([('units', 'K')])
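The expected contract can be sketched with a toy reduction (a hypothetical helper, not xarray's code): with keep_attrs=True the result carries the input's attrs through unchanged.

```python
# Hypothetical toy reduction illustrating the keep_attrs contract.
def reduce_with_attrs(data, attrs, keep_attrs=False):
    result = sum(data) / len(data)  # stand-in for the quantile computation
    return result, (dict(attrs) if keep_attrs else {})

_, out_attrs = reduce_with_attrs([0, 0], {"units": "K"}, keep_attrs=True)
assert out_attrs == {"units": "K"}
```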

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: 69c7e01e5167a3137c285cb50d1978252bb8bcbf
python: 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 4.15.0-60-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: en_CA.UTF-8
libhdf5: 1.10.2
libnetcdf: 4.6.1

xarray: 0.12.3+88.g69c7e01e.dirty
pandas: 0.23.4
numpy: 1.16.1
scipy: 1.1.0
netCDF4: 1.3.1
pydap: installed
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.2.1
dask: 0.19.0
distributed: 1.23.0
matplotlib: 3.0.2
cartopy: 0.17.0
seaborn: None
numbagg: None
setuptools: 41.0.0
pip: 9.0.1
conda: None
pytest: 4.4.0
IPython: 7.0.1
sphinx: 1.7.1
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3304/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
561210241 MDExOlB1bGxSZXF1ZXN0MzcyMDYyNTM2 3758 Fix interp bug when indexer shares coordinates with array huard 81219 closed 0     4 2020-02-06T19:06:22Z 2020-03-13T13:58:38Z 2020-03-13T13:58:38Z CONTRIBUTOR   0 pydata/xarray/pulls/3758
  • [x] Closes #3252
  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Replaces #3262 (I think).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3758/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
538620718 MDExOlB1bGxSZXF1ZXN0MzUzNzM1MDM4 3631 Add support for CFTimeIndex in get_clean_interp_index huard 81219 closed 0     11 2019-12-16T19:57:24Z 2020-01-26T18:36:24Z 2020-01-26T14:10:37Z CONTRIBUTOR   0 pydata/xarray/pulls/3631
  • [x] Closes #3641
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Related to #3349

As suggested by @spencerkclark, index values are computed as a delta with respect to 1970-01-01.

At the moment, this fails if dates fall outside of the range for nanosecond timedeltas [1678 AD, 2262 AD]. Is this something we can fix?
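The [1678 AD, 2262 AD] bound comes from int64 nanosecond arithmetic; a quick sanity check:

```python
# An int64 counts about +/-292 years of nanoseconds around the 1970 epoch,
# which is where the ~1678..2262 AD limits come from.
NS_PER_YEAR = 1e9 * 3600 * 24 * 365.25
span_years = 2**63 / NS_PER_YEAR
print(round(span_years))                                # 292
print(1970 - round(span_years), 1970 + round(span_years))  # 1678 2262
```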

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3631/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
539821504 MDExOlB1bGxSZXF1ZXN0MzU0NzMwNzI5 3642 Make datetime_to_numeric more robust to overflow errors huard 81219 closed 0     1 2019-12-18T17:34:41Z 2020-01-20T19:21:49Z 2020-01-20T19:21:49Z CONTRIBUTOR   0 pydata/xarray/pulls/3642
  • [x] Closes #3641
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API

This is likely only safe with NumPy>=1.17 though.
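One way to sidestep the overflow (a hedged sketch of the idea, not necessarily what this PR does) is to divide the timedeltas by the target unit, yielding floats instead of int64 nanoseconds:

```python
import numpy as np

# ~400 years expressed as days: dividing by a unit timedelta gives floats,
# so no int64-nanosecond overflow occurs.
td = np.array([np.timedelta64(400 * 365, 'D')])
days = td / np.timedelta64(1, 'D')
print(days)  # [146000.]
```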

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3642/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
492987154 MDExOlB1bGxSZXF1ZXN0MzE3MDU0MjUz 3305 Honor `keep_attrs` in DataArray.quantile huard 81219 closed 0     1 2019-09-12T19:27:14Z 2019-09-15T22:16:27Z 2019-09-15T22:16:15Z CONTRIBUTOR   0 pydata/xarray/pulls/3305
  • [x] Closes #3304
  • [x] Tests added
  • [x] Passes black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Note that I've set the default to True (if keep_attrs is None). This sounded reasonable since quantiles share the same units and properties as the original array, but I can switch it to False if that's the usual default.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3305/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
455262061 MDU6SXNzdWU0NTUyNjIwNjE= 3018 Add quantile method to groupby object huard 81219 closed 0     4 2019-06-12T14:54:35Z 2019-07-02T16:23:57Z 2019-06-24T15:21:29Z CONTRIBUTOR      

Dataset and DataArray objects have a quantile method, but not GroupBy. This would be useful for climatological analyses.
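In the meantime, the same result can be obtained by applying quantile group by group; a sketch with plain dictionaries (hypothetical data, not xarray's GroupBy machinery):

```python
import numpy as np

# Hypothetical grouped data; np.quantile applied per group as a workaround.
groups = {"djf": [1, 2, 3, 4], "jja": [10, 20, 30, 40]}
q90 = {name: np.quantile(values, 0.9) for name, values in groups.items()}
print(q90)
```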

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3018/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
461088361 MDU6SXNzdWU0NjEwODgzNjE= 3047 Assign attributes to DataArrays when creating dataset with PydapDataStore + subsetting huard 81219 closed 0     2 2019-06-26T17:14:10Z 2019-06-27T12:02:34Z 2019-06-27T12:02:33Z CONTRIBUTOR      

MCVE Code Sample

```python
import xarray as xr

# PyDAP access without subsetting - everything's fine
url = 'http://remotetest.unidata.ucar.edu/thredds/dodsC/testdods/coads_climatology.nc'
ds = xr.open_dataset(url, engine='pydap', decode_times=False)
ds.TIME.units  # yields 'hour since 0000-01-01 00:00:00'

# PyDAP access with subsetting - variable attributes are global...
dss = xr.open_dataset(url + '?SST[0:1:11][0:1:0][0:1:0]', engine='pydap', decode_times=False)
print(dss.SST.attrs)   # all good so far
print(dss.TIME.attrs)  # oh oh... nothing
print(dss.attrs['TIME.units'])
```

Problem Description

Opening a subsetted dataset with PydapDataStore creates global attributes instead of variable attributes.

Expected Output

All the global TIME.* attributes should be attributes of the TIME DataArray.
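The expected fix-up can be sketched in plain Python (hypothetical dicts standing in for the store's global and variable attributes):

```python
# Promote dotted global attributes ("VAR.attr") onto the matching variable.
global_attrs = {"TIME.units": "hour since 0000-01-01 00:00:00",
                "history": "unchanged global attribute"}
var_attrs = {"TIME": {}}

for key in list(global_attrs):
    name, dot, attr = key.partition(".")
    if dot and name in var_attrs:
        var_attrs[name][attr] = global_attrs.pop(key)

print(var_attrs["TIME"])  # {'units': 'hour since 0000-01-01 00:00:00'}
print(global_attrs)       # {'history': 'unchanged global attribute'}
```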

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 4.15.0-50-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_CA.UTF-8
LOCALE: en_CA.UTF-8
libhdf5: 1.10.2
libnetcdf: 4.6.1

xarray: 0.12.1+60.g6fc855fb
pandas: 0.23.4
numpy: 1.16.1
scipy: 1.1.0
netCDF4: 1.3.1
pydap: installed
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: None
PseudonetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.2.1
dask: 0.19.0
distributed: 1.23.0
matplotlib: 3.0.2
cartopy: 0.17.0
seaborn: None
setuptools: 41.0.0
pip: 9.0.1
conda: None
pytest: 4.4.0
IPython: 7.0.1
sphinx: 1.7.1
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3047/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
423405197 MDExOlB1bGxSZXF1ZXN0MjYyOTgzOTcz 2828 Add quantile method to GroupBy huard 81219 closed 0     6 2019-03-20T18:20:41Z 2019-06-24T15:21:36Z 2019-06-24T15:21:29Z CONTRIBUTOR   0 pydata/xarray/pulls/2828
  • [x] Tests added
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Fixes #3018

Note that I've added an unrelated test that exposes an issue with grouping when there is only one element per group.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2828/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);