
issue_comments


20 rows where user = 13190237 sorted by updated_at descending


id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
625109242 https://github.com/pydata/xarray/issues/3868#issuecomment-625109242 https://api.github.com/repos/pydata/xarray/issues/3868 MDEyOklzc3VlQ29tbWVudDYyNTEwOTI0Mg== ulijh 13190237 2020-05-07T08:27:26Z 2020-05-07T08:27:26Z CONTRIBUTOR

Thanks for implementing this! This is a feature that we will be using for sure, mostly with indices of type 1 which, in many cases, can easily be extrapolated. Having this as a default, or as a switch to enable extrapolation where possible, would help a lot!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  What should pad do about IndexVariables? 584461380
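The comment above wishes for `pad` to extrapolate evenly spaced indices rather than filling them. A minimal sketch of what such an extrapolation could look like for a uniform 1-D coordinate (the helper name `extrapolate_index` and the uniform-step assumption are mine, not xarray API):

```python
import numpy as np

def extrapolate_index(index, before, after):
    """Extend a uniformly spaced coordinate by `before`/`after` entries."""
    step = index[1] - index[0]  # assumes a constant step size
    left = index[0] - step * np.arange(before, 0, -1)
    right = index[-1] + step * np.arange(1, after + 1)
    return np.concatenate([left, index, right])

extrapolate_index(np.array([10.0, 20.0, 30.0]), before=1, after=2)
# → array([ 0., 10., 20., 30., 40., 50.])
```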
623607232 https://github.com/pydata/xarray/pull/4022#issuecomment-623607232 https://api.github.com/repos/pydata/xarray/issues/4022 MDEyOklzc3VlQ29tbWVudDYyMzYwNzIzMg== ulijh 13190237 2020-05-04T17:45:40Z 2020-05-04T17:45:40Z CONTRIBUTOR

Thanks @mathause , this works for me. Also with a change of dtype in func, as long as "output_dtypes" is set correctly.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix/apply ufunc meta dtype 611437250
529945663 https://github.com/pydata/xarray/issues/3297#issuecomment-529945663 https://api.github.com/repos/pydata/xarray/issues/3297 MDEyOklzc3VlQ29tbWVudDUyOTk0NTY2Mw== ulijh 13190237 2019-09-10T13:52:59Z 2019-09-10T13:52:59Z CONTRIBUTOR

I am in the exact same situation. @DerWeh, with the current master you can do `da.to_netcdf("complex.nc", engine="h5netcdf", invalid_netcdf=True)`, which works for me until there is an `engine="hdf5"` or maybe a `da.to_hdf()` method.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add writing complex data to docs 491215043
525634152 https://github.com/pydata/xarray/issues/2511#issuecomment-525634152 https://api.github.com/repos/pydata/xarray/issues/2511 MDEyOklzc3VlQ29tbWVudDUyNTYzNDE1Mg== ulijh 13190237 2019-08-28T08:12:13Z 2019-08-28T08:12:13Z CONTRIBUTOR

I think the problem is somewhere here:

https://github.com/pydata/xarray/blob/aaeea6250b89e3605ee1d1a160ad50d6ed657c7e/xarray/core/utils.py#L85-L103

I don't think `pandas.Index` can hold lazy arrays. Could there be a way around this by exploiting `dask.dataframe` indexing methods?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Array indexing with dask arrays 374025325
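For context on the point above: `pandas.Index` always backs its labels with a concrete in-memory array, so a lazy array passed to it would be materialized up front. A small illustration (plain numpy here; with a dask array the same construction would trigger a compute):

```python
import numpy as np
import pandas as pd

# pandas.Index eagerly stores concrete values; there is no lazy variant.
idx = pd.Index(np.arange(5))
isinstance(idx.values, np.ndarray)  # → True
```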
525404314 https://github.com/pydata/xarray/pull/3244#issuecomment-525404314 https://api.github.com/repos/pydata/xarray/issues/3244 MDEyOklzc3VlQ29tbWVudDUyNTQwNDMxNA== ulijh 13190237 2019-08-27T17:31:41Z 2019-08-27T17:31:41Z CONTRIBUTOR

Thanks guys!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Make argmin/max work lazy with dask 484212164
525365481 https://github.com/pydata/xarray/pull/3244#issuecomment-525365481 https://api.github.com/repos/pydata/xarray/issues/3244 MDEyOklzc3VlQ29tbWVudDUyNTM2NTQ4MQ== ulijh 13190237 2019-08-27T15:53:46Z 2019-08-27T15:53:46Z CONTRIBUTOR

Maybe this is a stupid question: what's the best way to resolve this conflict and get the checks to run? Should I merge master, or rebase (which seems somewhat of a pain to get the remote up to date)? Thanks for the advice!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Make argmin/max work lazy with dask 484212164
524277968 https://github.com/pydata/xarray/pull/3244#issuecomment-524277968 https://api.github.com/repos/pydata/xarray/issues/3244 MDEyOklzc3VlQ29tbWVudDUyNDI3Nzk2OA== ulijh 13190237 2019-08-23T11:19:45Z 2019-08-23T11:21:03Z CONTRIBUTOR

Guys, could you have a look at the tests I modified (a4c3622) to check how many times things get computed? I tried to integrate it with the existing tests. The compute-count check could possibly be applied to some other tests as well. Maybe there is a smarter way of doing this?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Make argmin/max work lazy with dask 484212164
524078960 https://github.com/pydata/xarray/pull/3244#issuecomment-524078960 https://api.github.com/repos/pydata/xarray/issues/3244 MDEyOklzc3VlQ29tbWVudDUyNDA3ODk2MA== ulijh 13190237 2019-08-22T21:10:27Z 2019-08-22T21:10:27Z CONTRIBUTOR

Thanks @max-sixty! Sure, I'll do this tomorrow then.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Make argmin/max work lazy with dask 484212164
524075569 https://github.com/pydata/xarray/issues/3237#issuecomment-524075569 https://api.github.com/repos/pydata/xarray/issues/3237 MDEyOklzc3VlQ29tbWVudDUyNDA3NTU2OQ== ulijh 13190237 2019-08-22T21:00:34Z 2019-08-22T21:00:34Z CONTRIBUTOR

Thanks @shoyer. Cool, then this was easier than I expected. I added the patch and nanargmax/min to the nputils in #3244. What do you think?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ``argmax()`` causes dask to compute 483280810
524056506 https://github.com/pydata/xarray/pull/3221#issuecomment-524056506 https://api.github.com/repos/pydata/xarray/issues/3221 MDEyOklzc3VlQ29tbWVudDUyNDA1NjUwNg== ulijh 13190237 2019-08-22T20:04:29Z 2019-08-22T20:04:29Z CONTRIBUTOR

Thanks @dcherian and @shoyer for the review and advice! It seems the docs build and the checks are fine now. One more thing, as this is my first PR to xarray: Should I merge/rebase master into the allow_invalid_netcdf branch, or will you guys take it from this point?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow invalid_netcdf=True in to_netcdf() 481110823
523952983 https://github.com/pydata/xarray/issues/3237#issuecomment-523952983 https://api.github.com/repos/pydata/xarray/issues/3237 MDEyOklzc3VlQ29tbWVudDUyMzk1Mjk4Mw== ulijh 13190237 2019-08-22T15:22:33Z 2019-08-22T15:22:33Z CONTRIBUTOR

Those little changes do solve the MCVE, but break at least one test. I don't have enough of an understanding of the (nan)ops logic in xarray to get around the issue, but maybe this helps:

The change

``` diff
diff --git a/xarray/core/nanops.py b/xarray/core/nanops.py
index 9ba4eae2..784a1d01 100644
--- a/xarray/core/nanops.py
+++ b/xarray/core/nanops.py
@@ -91,17 +91,9 @@ def nanargmin(a, axis=None):
     fill_value = dtypes.get_pos_infinity(a.dtype)
     if a.dtype.kind == "O":
         return _nan_argminmax_object("argmin", fill_value, a, axis=axis)
-    a, mask = _replace_nan(a, fill_value)
-    if isinstance(a, dask_array_type):
-        res = dask_array.argmin(a, axis=axis)
-    else:
-        res = np.argmin(a, axis=axis)
-
-    if mask is not None:
-        mask = mask.all(axis=axis)
-        if mask.any():
-            raise ValueError("All-NaN slice encountered")
-    return res
+    module = dask_array if isinstance(a, dask_array_type) else nputils
+    return module.nanargmin(a, axis=axis)


 def nanargmax(a, axis=None):
@@ -109,17 +101,8 @@ def nanargmax(a, axis=None):
     if a.dtype.kind == "O":
         return _nan_argminmax_object("argmax", fill_value, a, axis=axis)
-    a, mask = _replace_nan(a, fill_value)
-    if isinstance(a, dask_array_type):
-        res = dask_array.argmax(a, axis=axis)
-    else:
-        res = np.argmax(a, axis=axis)
-
-    if mask is not None:
-        mask = mask.all(axis=axis)
-        if mask.any():
-            raise ValueError("All-NaN slice encountered")
-    return res
+    module = dask_array if isinstance(a, dask_array_type) else nputils
+    return module.nanargmax(a, axis=axis)


 def nansum(a, axis=None, dtype=None, out=None, min_count=None):
```

The failing test

``` python
...
__ TestVariable.test_reduce __
...

def f(values, axis=None, skipna=None, **kwargs):
    if kwargs.pop("out", None) is not None:
        raise TypeError("`out` is not valid for {}".format(name))

    values = asarray(values)

    if coerce_strings and values.dtype.kind in "SU":
        values = values.astype(object)

    func = None
    if skipna or (skipna is None and values.dtype.kind in "cfO"):
        nanname = "nan" + name
        func = getattr(nanops, nanname)
    else:
        func = _dask_or_eager_func(name)

    try:
        return func(values, axis=axis, **kwargs)
    except AttributeError:
        if isinstance(values, dask_array_type):
            try:  # dask/dask#3133 dask sometimes needs dtype argument
                # if func does not accept dtype, then raises TypeError
                return func(values, axis=axis, dtype=values.dtype, **kwargs)
            except (AttributeError, TypeError):
                msg = "%s is not yet implemented on dask arrays" % name
        else:
            msg = (
                "%s is not available with skipna=False with the "
                "installed version of numpy; upgrade to numpy 1.12 "
                "or newer to use skipna=True or skipna=None" % name
            )
        raise NotImplementedError(msg)

E NotImplementedError: argmax is not available with skipna=False with the installed version of numpy; upgrade to numpy 1.12 or newer to use skipna=True or skipna=None

...
```

Note: I have numpy 1.17 installed, so the error message here seems misleading.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ``argmax()`` causes dask to compute 483280810
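The diff above replaces the mask-based implementation with a dispatch to a nan-aware argmin/argmax on the appropriate array module. A standalone sketch of that dispatch pattern (using numpy's `nanargmin` in place of xarray's internal `nputils`, and treating dask as optional):

```python
import numpy as np

def nanargmin(a, axis=None):
    # Pick the array module by input type, then delegate to its
    # nan-aware argmin, mirroring the dispatch in the diff above.
    try:
        import dask.array as dask_array
        module = dask_array if isinstance(a, dask_array.Array) else np
    except ImportError:
        module = np
    return module.nanargmin(a, axis=axis)

nanargmin(np.array([3.0, np.nan, 1.0]))  # → 2
```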
523819932 https://github.com/pydata/xarray/pull/3221#issuecomment-523819932 https://api.github.com/repos/pydata/xarray/issues/3221 MDEyOklzc3VlQ29tbWVudDUyMzgxOTkzMg== ulijh 13190237 2019-08-22T09:08:28Z 2019-08-22T09:08:28Z CONTRIBUTOR

Hey guys, I'd like to move this forward, but the doc build failed with the message below.

  • Do I need to add something specific to make the docs build?
  • Do I need to do anything else to get this PR merged?

Thanks!

``` python

Exception in /home/vsts/work/1/s/doc/io.rst at block ending on line 388
Specify :okexcept: as an option in the ipython:: block to suppress this message

ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-17-9100cd49113c> in <module>
----> 1 da.to_netcdf("complex.nc", engine="h5netcdf", invalid_netcdf=True)

~/work/1/s/xarray/core/dataarray.py in to_netcdf(self, *args, **kwargs)
   2214         dataset = self.to_dataset()
   2215
-> 2216         return dataset.to_netcdf(*args, **kwargs)
   2217
   2218     def to_dict(self, data: bool = True) -> dict:

~/work/1/s/xarray/core/dataset.py in to_netcdf(self, path, mode, format, group, engine, encoding, unlimited_dims, compute, invalid_netcdf)
   1519             unlimited_dims=unlimited_dims,
   1520             compute=compute,
-> 1521             invalid_netcdf=invalid_netcdf,
   1522         )
   1523

~/work/1/s/xarray/backends/api.py in to_netcdf(dataset, path_or_file, mode, format, group, engine, encoding, unlimited_dims, compute, multifile, invalid_netcdf)
   1052             "unrecognized option 'invalid_netcdf' for engine %s" % engine
   1053         )
-> 1054     store = store_open(target, mode, format, group, **kwargs)
   1055
   1056     if unlimited_dims is None:

~/work/1/s/xarray/backends/h5netcdf_.py in __init__(self, filename, mode, format, group, lock, autoclose, invalid_netcdf)
     81         invalid_netcdf=None,
     82     ):
---> 83         import h5netcdf
     84
     85         if format not in [None, "NETCDF4"]:

ModuleNotFoundError: No module named 'h5netcdf'

/home/vsts/work/1/s/xarray/core/dataarray.py:docstring of xarray.DataArray.integrate:12: WARNING: Unexpected indentation.
/home/vsts/work/1/s/xarray/core/dataarray.py:docstring of xarray.DataArray.interp:20: WARNING: Inline strong start-string without end-string.
/home/vsts/work/1/s/xarray/core/dataarray.py:docstring of xarray.DataArray.interpolate_na:8: WARNING: Definition list ends without a blank line; unexpected unindent.

Sphinx parallel build error:
RuntimeError: Non Expected exception in /home/vsts/work/1/s/doc/io.rst line 388

[error]The operation was canceled.
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow invalid_netcdf=True in to_netcdf() 481110823
522986699 https://github.com/pydata/xarray/issues/2511#issuecomment-522986699 https://api.github.com/repos/pydata/xarray/issues/2511 MDEyOklzc3VlQ29tbWVudDUyMjk4NjY5OQ== ulijh 13190237 2019-08-20T12:15:18Z 2019-08-20T18:52:49Z CONTRIBUTOR

Even though the example from above does work, sadly, the following does not:

``` python
import xarray as xr
import dask.array as da
import numpy as np

da = xr.DataArray(np.random.rand(3*4*5).reshape((3, 4, 5))).chunk(dict(dim_0=1))
idcs = da.argmax('dim_2')
da[dict(dim_2=idcs)]
```

results in

``` python
TypeError                                 Traceback (most recent call last)
<ipython-input-4-3542cdd6d61c> in <module>
----> 1 da[dict(dim_2=idcs)]

~/src/xarray/xarray/core/dataarray.py in __getitem__(self, key)
    604         else:
    605             # xarray-style array indexing
--> 606             return self.isel(indexers=self._item_key_to_dict(key))
    607
    608     def __setitem__(self, key: Any, value: Any) -> None:

~/src/xarray/xarray/core/dataarray.py in isel(self, indexers, drop, **indexers_kwargs)
    986         """
    987         indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "isel")
--> 988         ds = self._to_temp_dataset().isel(drop=drop, indexers=indexers)
    989         return self._from_temp_dataset(ds)
    990

~/src/xarray/xarray/core/dataset.py in isel(self, indexers, drop, **indexers_kwargs)
   1901                 indexes[name] = new_index
   1902             else:
-> 1903                 new_var = var.isel(indexers=var_indexers)
   1904
   1905             variables[name] = new_var

~/src/xarray/xarray/core/variable.py in isel(self, indexers, drop, **indexers_kwargs)
    984             if dim in indexers:
    985                 key[i] = indexers[dim]
--> 986         return self[tuple(key)]
    987
    988     def squeeze(self, dim=None):

~/src/xarray/xarray/core/variable.py in __getitem__(self, key)
    675         array x.values directly.
    676         """
--> 677         dims, indexer, new_order = self._broadcast_indexes(key)
    678         data = as_indexable(self._data)[indexer]
    679         if new_order:

~/src/xarray/xarray/core/variable.py in _broadcast_indexes(self, key)
    532             if isinstance(k, Variable):
    533                 if len(k.dims) > 1:
--> 534                     return self._broadcast_indexes_vectorized(key)
    535                 dims.append(k.dims[0])
    536             elif not isinstance(k, integer_types):

~/src/xarray/xarray/core/variable.py in _broadcast_indexes_vectorized(self, key)
    660             new_order = None
    661
--> 662         return out_dims, VectorizedIndexer(tuple(out_key)), new_order
    663
    664     def __getitem__(self, key):

~/src/xarray/xarray/core/indexing.py in __init__(self, key)
    460                 raise TypeError(
    461                     "unexpected indexer type for {}: {!r}".format(
--> 462                         type(self).__name__, k
    463                     )
    464                 )

TypeError: unexpected indexer type for VectorizedIndexer: dask.array<arg_agg-aggregate, shape=(3, 4), dtype=int64, chunksize=(1, 4)>
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Array indexing with dask arrays 374025325
522237633 https://github.com/pydata/xarray/pull/3221#issuecomment-522237633 https://api.github.com/repos/pydata/xarray/issues/3221 MDEyOklzc3VlQ29tbWVudDUyMjIzNzYzMw== ulijh 13190237 2019-08-17T13:38:15Z 2019-08-17T13:38:15Z CONTRIBUTOR

Hmm, seems like I broke the docs somehow... Could someone advise me on how to fix this? Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow invalid_netcdf=True in to_netcdf() 481110823
521918496 https://github.com/pydata/xarray/pull/3221#issuecomment-521918496 https://api.github.com/repos/pydata/xarray/issues/3221 MDEyOklzc3VlQ29tbWVudDUyMTkxODQ5Ng== ulijh 13190237 2019-08-16T07:41:14Z 2019-08-16T07:41:34Z CONTRIBUTOR

> This will also need a note in whats-new.rst and a note in io.rst, perhaps under "Writing encoded data"

Sure, I'll add that when we are happy with the code and tests!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow invalid_netcdf=True in to_netcdf() 481110823
498178025 https://github.com/pydata/xarray/issues/2511#issuecomment-498178025 https://api.github.com/repos/pydata/xarray/issues/2511 MDEyOklzc3VlQ29tbWVudDQ5ODE3ODAyNQ== ulijh 13190237 2019-06-03T09:13:49Z 2019-06-03T09:13:49Z CONTRIBUTOR

As of version 0.12, indexing with dask arrays works out of the box... I think this can be closed now.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Array indexing with dask arrays 374025325
433304954 https://github.com/pydata/xarray/issues/2511#issuecomment-433304954 https://api.github.com/repos/pydata/xarray/issues/2511 MDEyOklzc3VlQ29tbWVudDQzMzMwNDk1NA== ulijh 13190237 2018-10-26T06:48:54Z 2018-10-26T06:48:54Z CONTRIBUTOR

It seems to work fine with the following change, but it has a lot of duplicated code...

``` diff
diff --git a/xarray/core/indexing.py b/xarray/core/indexing.py
index d51da471..9fe93581 100644
--- a/xarray/core/indexing.py
+++ b/xarray/core/indexing.py
@@ -7,6 +7,7 @@ from datetime import timedelta

 import numpy as np
 import pandas as pd
+import dask.array as da

 from . import duck_array_ops, nputils, utils
 from .pycompat import (
@@ -420,6 +421,19 @@ class VectorizedIndexer(ExplicitIndexer):
                                      'have different numbers of dimensions: {}'
                                      .format(ndims))
                 k = np.asarray(k, dtype=np.int64)
+            elif isinstance(k, dask_array_type):
+                if not np.issubdtype(k.dtype, np.integer):
+                    raise TypeError('invalid indexer array, does not have '
+                                    'integer dtype: {!r}'.format(k))
+                if ndim is None:
+                    ndim = k.ndim
+                elif ndim != k.ndim:
+                    ndims = [k.ndim for k in key
+                             if isinstance(k, (np.ndarray,) + dask_array_type)]
+                    raise ValueError('invalid indexer key: ndarray arguments '
+                                     'have different numbers of dimensions: {}'
+                                     .format(ndims))
+                k = da.array(k, dtype=np.int64)
             else:
                 raise TypeError('unexpected indexer type for {}: {!r}'
                                 .format(type(self).__name__, k))
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Array indexing with dask arrays 374025325
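The duplication the patch above complains about is mostly the integer-dtype and ndim validation repeated for the dask branch. Those checks can be sketched independently of xarray (the helper name `validate_indexer` is mine, for illustration only):

```python
import numpy as np

def validate_indexer(k, ndim=None):
    """Validate one indexer array, mirroring the checks in the patch above."""
    k = np.asarray(k)
    # Reject arrays without an integer dtype.
    if not np.issubdtype(k.dtype, np.integer):
        raise TypeError(
            'invalid indexer array, does not have integer dtype: {!r}'.format(k))
    # All indexer arrays in one key must agree on the number of dimensions.
    if ndim is not None and ndim != k.ndim:
        raise ValueError(
            'invalid indexer key: arguments have different numbers of dimensions')
    return k.astype(np.int64)

validate_indexer([1, 2, 3]).dtype  # → dtype('int64')
```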
346157283 https://github.com/pydata/xarray/issues/1240#issuecomment-346157283 https://api.github.com/repos/pydata/xarray/issues/1240 MDEyOklzc3VlQ29tbWVudDM0NjE1NzI4Mw== ulijh 13190237 2017-11-21T20:52:49Z 2017-11-21T20:52:49Z CONTRIBUTOR

@jhamman - thanks, this should be useful...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cannot use xarrays own times for indexing 204071440
346037822 https://github.com/pydata/xarray/issues/1240#issuecomment-346037822 https://api.github.com/repos/pydata/xarray/issues/1240 MDEyOklzc3VlQ29tbWVudDM0NjAzNzgyMg== ulijh 13190237 2017-11-21T14:11:36Z 2017-11-21T14:11:36Z CONTRIBUTOR

Hi, this is still the case for version 0.10.0.

``` python
arr = xr.DataArray(np.random.rand(10, 3),
                   [('time', pd.date_range('2000-01-01', periods=10)),
                    ('space', ['IA', 'IL', 'IN'])])
arr.loc[arr.time[2]:arr.time[5]]
```

fails, but doing the same thing on a pandas dataframe works just fine:

``` python
dfr = arr.to_dataframe(name='dfr')
dfr.loc[arr.time[2]:arr.time[5]]
```

I'd really appreciate seeing this working on a DataArray.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cannot use xarrays own times for indexing 204071440
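The pandas behaviour the comment above refers to is label-based slicing on a `DatetimeIndex` via `.loc`, which is inclusive of both endpoints. A self-contained pandas version of the example:

```python
import numpy as np
import pandas as pd

dfr = pd.DataFrame(np.random.rand(10, 3),
                   index=pd.date_range('2000-01-01', periods=10),
                   columns=['IA', 'IL', 'IN'])
# Label-based slicing with the index's own labels; both endpoints included.
sliced = dfr.loc[dfr.index[2]:dfr.index[5]]
len(sliced)  # → 4
```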
332122006 https://github.com/pydata/xarray/issues/1591#issuecomment-332122006 https://api.github.com/repos/pydata/xarray/issues/1591 MDEyOklzc3VlQ29tbWVudDMzMjEyMjAwNg== ulijh 13190237 2017-09-26T08:15:45Z 2017-09-26T08:15:45Z CONTRIBUTOR

Ok, thanks for opening the dask issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  indexing/groupby fails on array opened with chunks from netcdf 260279615

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);