issue_comments

4 rows where author_association = "NONE" and issue = 517338735 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
627316607 https://github.com/pydata/xarray/issues/3484#issuecomment-627316607 https://api.github.com/repos/pydata/xarray/issues/3484 MDEyOklzc3VlQ29tbWVudDYyNzMxNjYwNw== mazzma12 23187108 2020-05-12T12:39:07Z 2020-05-12T12:39:07Z NONE

`da.copy(data=da.data.todense()).plot()` should work.

It works indeed, thank you!
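
For context, a minimal self-contained sketch of that workaround; the array shape and variable name here are illustrative, not taken from the thread:

```python
import numpy as np
import sparse
import xarray as xr

# Illustrative sparse-backed DataArray (mostly-NaN random data).
dense = np.random.random((50, 50))
dense[dense < 0.9] = np.nan
da = xr.DataArray(sparse.COO.from_numpy(dense, fill_value=np.nan), dims=("x", "y"))

# Calling da.plot() directly would raise RuntimeError, because sparse refuses
# implicit densification. copy(data=...) swaps in a dense payload while keeping
# dims, coords, and attrs, so xarray's plotting works as usual.
da.copy(data=da.data.todense()).plot()
```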

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Need documentation on sparse / cupy integration 517338735
626787590 https://github.com/pydata/xarray/issues/3484#issuecomment-626787590 https://api.github.com/repos/pydata/xarray/issues/3484 MDEyOklzc3VlQ29tbWVudDYyNjc4NzU5MA== mazzma12 23187108 2020-05-11T15:49:24Z 2020-05-11T15:49:24Z NONE

Hello, do you have any documentation on how to plot data in a sparse array using the `xarray.plot` accessor? I get the error below, but if I convert to numpy/scipy with the `todense()` method I will likely lose the convenient plot method from xarray... Thank you for your help!


```python
RuntimeError                              Traceback (most recent call last)
<ipython-input-331-9d69abf57c11> in <module>
----> 1 slice_res_ds['value'].plot()

~/.pyenv/versions/emi/lib/python3.6/site-packages/xarray/plot/plot.py in __call__(self, **kwargs)
    463
    464     def __call__(self, **kwargs):
--> 465         return plot(self._da, **kwargs)
    466
    467     @functools.wraps(hist)

~/.pyenv/versions/emi/lib/python3.6/site-packages/xarray/plot/plot.py in plot(darray, row, col, col_wrap, ax, hue, rtol, subplot_kws, **kwargs)
    200         kwargs["ax"] = ax
    201
--> 202     return plotfunc(darray, **kwargs)
    203
    204

~/.pyenv/versions/emi/lib/python3.6/site-packages/xarray/plot/plot.py in newplotfunc(darray, x, y, figsize, size, aspect, ax, row, col, col_wrap, xincrease, yincrease, add_colorbar, add_labels, vmin, vmax, cmap, center, robust, extend, levels, infer_intervals, colors, subplot_kws, cbar_ax, cbar_kwargs, xscale, yscale, xticks, yticks, xlim, ylim, norm, **kwargs)
    692
    693         # Pass the data as a masked ndarray too
--> 694         zval = darray.to_masked_array(copy=False)
    695
    696         # Replace pd.Intervals if contained in xval or yval.

~/.pyenv/versions/emi/lib/python3.6/site-packages/xarray/core/dataarray.py in to_masked_array(self, copy)
   2301             Masked where invalid values (nan or inf) occur.
   2302         """
-> 2303         values = self.values  # only compute lazy arrays once
   2304         isnull = pd.isnull(values)
   2305         return np.ma.MaskedArray(data=values, mask=isnull, copy=copy)

~/.pyenv/versions/emi/lib/python3.6/site-packages/xarray/core/dataarray.py in values(self)
    565     def values(self) -> np.ndarray:
    566         """The array's data as a numpy.ndarray"""
--> 567         return self.variable.values
    568
    569     @values.setter

~/.pyenv/versions/emi/lib/python3.6/site-packages/xarray/core/variable.py in values(self)
    446     def values(self):
    447         """The variable's data as a numpy.ndarray"""
--> 448         return _as_array_or_item(self._data)
    449
    450     @values.setter

~/.pyenv/versions/emi/lib/python3.6/site-packages/xarray/core/variable.py in _as_array_or_item(data)
    252     TODO: remove this (replace with np.asarray) once these issues are fixed
    253     """
--> 254     data = np.asarray(data)
    255     if data.ndim == 0:
    256         if data.dtype.kind == "M":

~/.pyenv/versions/emi/lib/python3.6/site-packages/numpy/core/_asarray.py in asarray(a, dtype, order)
     83
     84     """
---> 85     return array(a, dtype, copy=False, order=order)
     86
     87

~/.pyenv/versions/emi/lib/python3.6/site-packages/sparse/_sparse_array.py in __array__(self, **kwargs)
    221         if not AUTO_DENSIFY:
    222             raise RuntimeError(
--> 223                 "Cannot convert a sparse array to dense automatically. "
    224                 "To manually densify, use the todense method."
    225             )

RuntimeError: Cannot convert a sparse array to dense automatically. To manually densify, use the todense method.
```
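
The traceback comes down to xarray calling `np.asarray` on the sparse-backed data, which sparse deliberately rejects unless densification is explicit. A minimal sketch of that behaviour (the array here is illustrative):

```python
import numpy as np
import sparse

s = sparse.COO.from_numpy(np.eye(3))  # small illustrative sparse array

try:
    np.asarray(s)          # implicit densification: sparse raises RuntimeError
except RuntimeError as err:
    print(err)             # "Cannot convert a sparse array to dense automatically. ..."

dense = s.todense()        # explicit densification is the supported path
print(type(dense))         # <class 'numpy.ndarray'>
```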

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Need documentation on sparse / cupy integration 517338735
549627590 https://github.com/pydata/xarray/issues/3484#issuecomment-549627590 https://api.github.com/repos/pydata/xarray/issues/3484 MDEyOklzc3VlQ29tbWVudDU0OTYyNzU5MA== friedrichknuth 10554254 2019-11-05T01:50:29Z 2020-02-12T02:51:51Z NONE

After reading through the issue tracker and PRs, it looks like sparse arrays can safely be wrapped with xarray, thanks to the work done in PR#3117, but built-in functions are still under development (e.g. PR#3542). As a user, here is what I am seeing when test driving sparse:

Sparse gives me a smaller in-memory array

```python
In [1]: import xarray as xr, sparse, sys, numpy as np, dask.array as da

In [2]: x = np.random.random((100, 100, 100))

In [3]: x[x < 0.9] = np.nan

In [4]: s = sparse.COO.from_numpy(x, fill_value=np.nan)

In [5]: sys.getsizeof(s)
Out[5]: 3189592

In [6]: sys.getsizeof(x)
Out[6]: 8000128
```

Which I can wrap with dask and xarray:

```python
In [7]: x = da.from_array(x)

In [8]: s = da.from_array(s)

In [9]: ds_dense = xr.DataArray(x).to_dataset(name='data_variable')

In [10]: ds_sparse = xr.DataArray(s).to_dataset(name='data_variable')

In [11]: ds_dense
Out[11]:
<xarray.Dataset>
Dimensions:        (dim_0: 100, dim_1: 100, dim_2: 100)
Dimensions without coordinates: dim_0, dim_1, dim_2
Data variables:
    data_variable  (dim_0, dim_1, dim_2) float64 dask.array<chunksize=(100, 100, 100), meta=np.ndarray>

In [12]: ds_sparse
Out[12]:
<xarray.Dataset>
Dimensions:        (dim_0: 100, dim_1: 100, dim_2: 100)
Dimensions without coordinates: dim_0, dim_1, dim_2
Data variables:
    data_variable  (dim_0, dim_1, dim_2) float64 dask.array<chunksize=(100, 100, 100), meta=sparse.COO>
```

However, computation on a sparse array takes longer than running compute on a dense array (which I think is expected...?)

```python
In [13]: %%time
    ...: ds_sparse.mean().compute()
CPU times: user 487 ms, sys: 22.9 ms, total: 510 ms
Wall time: 518 ms
Out[13]:
<xarray.Dataset>
Dimensions:  ()
Data variables:
    data_variable  float64 0.9501

In [14]: %%time
    ...: ds_dense.mean().compute()
CPU times: user 10.9 ms, sys: 3.91 ms, total: 14.8 ms
Wall time: 13.8 ms
Out[14]:
<xarray.Dataset>
Dimensions:  ()
Data variables:
    data_variable  float64 0.9501
```

And writing to netcdf, to take advantage of the smaller data size, doesn't work out of the box (yet)

```python
In [15]: ds_sparse.to_netcdf('ds_sparse.nc')
Out[15]: ...
RuntimeError: Cannot convert a sparse array to dense automatically. To manually densify, use the todense method.
```
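
One possible workaround, continuing from the session above and assuming it is acceptable to densify before writing (netCDF has no sparse representation here, so the on-disk copy is dense): use dask's `map_blocks` to call `todense()` on each sparse chunk, then write the result. A sketch, with an illustrative output filename:

```python
# Densify each dask chunk explicitly, then write as usual.
dense_var = ds_sparse["data_variable"].copy(
    data=ds_sparse["data_variable"].data.map_blocks(lambda b: b.todense(), dtype=float)
)
dense_var.to_dataset(name="data_variable").to_netcdf("ds_sparse_densified.nc")
```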

Additional discussion happening at #3213

@dcherian @shoyer Am I missing any built-in methods that are working and ready for public release? Happy to send in a PR, if any of what is provided here should go into a basic example for the docs.

At this stage, I am not using sparse arrays for my own research just yet, but when I get to that anticipated phase I can dig in more on this and hopefully send in some useful PRs for improved documentation and fixes/features.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Need documentation on sparse / cupy integration 517338735
550471849 https://github.com/pydata/xarray/issues/3484#issuecomment-550471849 https://api.github.com/repos/pydata/xarray/issues/3484 MDEyOklzc3VlQ29tbWVudDU1MDQ3MTg0OQ== k-a-mendoza 4605410 2019-11-06T19:48:17Z 2019-11-06T19:48:17Z NONE

@friedrichknuth One of my motivations behind exploring sparse DataArray backends is reducing the memory footprint during merge operations. Consider the following:
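
A hedged illustration of the kind of merge meant here, with made-up variables and coordinates:

```python
import numpy as np
import xarray as xr

# Two variables observed over disjoint time ranges; an outer merge aligns them
# on the union of the coordinate, padding each variable with NaN where it has
# no data, so most of the merged array is fill values.
a = xr.Dataset({"station_a": ("time", np.random.random(1000))},
               coords={"time": np.arange(0, 1000)})
b = xr.Dataset({"station_b": ("time", np.random.random(1000))},
               coords={"time": np.arange(1000, 2000)})

merged = xr.merge([a, b])  # time now has 2000 entries; half of each variable is NaN
```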

One can imagine many such merge operations producing a lot of effectively empty indices. While sparse-backed arrays might be able to condense these empty indices in memory, it seems like xarray's sparse merging isn't quite compatible yet.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Need documentation on sparse / cupy integration 517338735

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
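
For reference, the filtered view shown at the top of this page corresponds to a query like the following against that schema. This is a sketch; the local database filename `github.db` is hypothetical.

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of this Datasette database
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'NONE' AND issue = 517338735
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows))  # 4 for this issue and filter
```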