issue_comments

4 rows where author_association = "CONTRIBUTOR", issue = 279909699 and user = 14314623 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
351608763 https://github.com/pydata/xarray/issues/1765#issuecomment-351608763 https://api.github.com/repos/pydata/xarray/issues/1765 MDEyOklzc3VlQ29tbWVudDM1MTYwODc2Mw== jbusecke 14314623 2017-12-14T04:54:59Z 2017-12-14T04:54:59Z CONTRIBUTOR

Thank you very much! I will give that a try in the next few days.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Error when using .apply_ufunc with .groupby_bins 279909699
350169611 https://github.com/pydata/xarray/issues/1765#issuecomment-350169611 https://api.github.com/repos/pydata/xarray/issues/1765 MDEyOklzc3VlQ29tbWVudDM1MDE2OTYxMQ== jbusecke 14314623 2017-12-08T04:32:10Z 2017-12-08T04:32:10Z CONTRIBUTOR

I added the line `bins = xr.DataArray(bins, dims=['dummy'])` but the same weird error appeared.

Here is the full traceback for this attempt (this was when I tried to save a netcdf). I can apply the wrapper without error, but as soon as I call either `.compute()` or `.to_netcdf()` (below) the error is raised.

```

ValueError Traceback (most recent call last) <timed eval> in <module>()

~/code/src/xarray/xarray/core/dataset.py in to_netcdf(self, path, mode, format, group, engine, encoding, unlimited_dims) 1130 return to_netcdf(self, path, mode, format=format, group=group, 1131 engine=engine, encoding=encoding, -> 1132 unlimited_dims=unlimited_dims) 1133 1134 def __unicode__(self):

~/code/src/xarray/xarray/backends/api.py in to_netcdf(dataset, path_or_file, mode, format, group, engine, writer, encoding, unlimited_dims) 616 try: 617 dataset.dump_to_store(store, sync=sync, encoding=encoding, --> 618 unlimited_dims=unlimited_dims) 619 if path_or_file is None: 620 return target.getvalue()

~/code/src/xarray/xarray/core/dataset.py in dump_to_store(self, store, encoder, sync, encoding, unlimited_dims) 1069 unlimited_dims=unlimited_dims) 1070 if sync: -> 1071 store.sync() 1072 1073 def to_netcdf(self, path=None, mode='w', format=None, group=None,

~/code/src/xarray/xarray/backends/scipy_.py in sync(self) 206 def sync(self): 207 with self.ensure_open(autoclose=True): --> 208 super(ScipyDataStore, self).sync() 209 self.ds.flush() 210

~/code/src/xarray/xarray/backends/common.py in sync(self) 208 209 def sync(self): --> 210 self.writer.sync() 211 212 def store_dataset(self, dataset):

~/code/src/xarray/xarray/backends/common.py in sync(self) 185 import dask 186 if LooseVersion(dask.__version__) > LooseVersion('0.8.1'): --> 187 da.store(self.sources, self.targets, lock=GLOBAL_LOCK) 188 else: 189 da.store(self.sources, self.targets)

~/code/miniconda/envs/standard/lib/python3.6/site-packages/dask/array/core.py in store(sources, targets, lock, regions, compute, **kwargs) 898 dsk = sharedict.merge((name, updates), *[src.dask for src in sources]) 899 if compute: --> 900 compute_as_if_collection(Array, dsk, keys, **kwargs) 901 else: 902 from ..delayed import Delayed

~/code/miniconda/envs/standard/lib/python3.6/site-packages/dask/base.py in compute_as_if_collection(cls, dsk, keys, get, **kwargs) 210 get = get or _globals['get'] or cls.__dask_scheduler__ 211 dsk2 = optimization_function(cls)(ensure_dict(dsk), keys, **kwargs) --> 212 return get(dsk2, keys, **kwargs) 213 214

~/code/miniconda/envs/standard/lib/python3.6/site-packages/dask/threaded.py in get(dsk, result, cache, num_workers, **kwargs) 73 results = get_async(pool.apply_async, len(pool._pool), dsk, result, 74 cache=cache, get_id=_thread_get_id, ---> 75 pack_exception=pack_exception, **kwargs) 76 77 # Cleanup pools associated to dead threads

~/code/miniconda/envs/standard/lib/python3.6/site-packages/dask/local.py in get_async(apply_async, num_workers, dsk, result, cache, get_id, rerun_exceptions_locally, pack_exception, raise_exception, callbacks, dumps, loads, **kwargs) 519 _execute_task(task, data) # Re-execute locally 520 else: --> 521 raise_exception(exc, tb) 522 res, worker_id = loads(res_info) 523 state['cache'][key] = res

~/code/miniconda/envs/standard/lib/python3.6/site-packages/dask/compatibility.py in reraise(exc, tb) 58 if exc.__traceback__ is not tb: 59 raise exc.with_traceback(tb) ---> 60 raise exc 61 62 else:

~/code/miniconda/envs/standard/lib/python3.6/site-packages/dask/local.py in execute_task(key, task_info, dumps, loads, get_id, pack_exception) 288 try: 289 task, data = loads(task_info) --> 290 result = _execute_task(task, data) 291 id = get_id() 292 result = dumps((result, id))

~/code/miniconda/envs/standard/lib/python3.6/site-packages/dask/local.py in _execute_task(arg, cache, dsk) 268 elif istask(arg): 269 func, args = arg[0], arg[1:] --> 270 args2 = [_execute_task(a, cache) for a in args] 271 return func(*args2) 272 elif not ishashable(arg):

~/code/miniconda/envs/standard/lib/python3.6/site-packages/dask/local.py in <listcomp>(.0) 268 elif istask(arg): 269 func, args = arg[0], arg[1:] --> 270 args2 = [_execute_task(a, cache) for a in args] 271 return func(*args2) 272 elif not ishashable(arg):

~/code/miniconda/envs/standard/lib/python3.6/site-packages/dask/local.py in _execute_task(arg, cache, dsk) 269 func, args = arg[0], arg[1:] 270 args2 = [_execute_task(a, cache) for a in args] --> 271 return func(*args2) 272 elif not ishashable(arg): 273 return arg

~/code/miniconda/envs/standard/lib/python3.6/site-packages/numpy/lib/function_base.py in __call__(self, *args, **kwargs) 2737 vargs.extend([kwargs[_n] for _n in names]) 2738 -> 2739 return self._vectorize_call(func=func, args=vargs) 2740 2741 def _get_ufunc_and_otypes(self, func, args):

~/code/miniconda/envs/standard/lib/python3.6/site-packages/numpy/lib/function_base.py in _vectorize_call(self, func, args) 2803 """Vectorized call to func over positional args.""" 2804 if self.signature is not None: -> 2805 res = self._vectorize_call_with_signature(func, args) 2806 elif not args: 2807 res = func()

~/code/miniconda/envs/standard/lib/python3.6/site-packages/numpy/lib/function_base.py in _vectorize_call_with_signature(self, func, args) 2844 2845 for index in np.ndindex(broadcast_shape): -> 2846 results = func(*(arg[index] for arg in args)) 2847 2848 n_results = len(results) if isinstance(results, tuple) else 1

<ipython-input-14-078f1623bd91> in _func(data, bin_data, bins) 90 91 binned = da_data.groupby_bins(da_bin_data, bins, labels=labels, ---> 92 include_lowest=True).sum() 93 return binned 94

~/code/src/xarray/xarray/core/common.py in groupby_bins(self, group, bins, right, labels, precision, include_lowest, squeeze) 466 cut_kwargs={'right': right, 'labels': labels, 467 'precision': precision, --> 468 'include_lowest': include_lowest}) 469 470 def rolling(self, min_periods=None, center=False, **windows):

~/code/src/xarray/xarray/core/groupby.py in __init__(self, obj, group, squeeze, grouper, bins, cut_kwargs) 225 226 if bins is not None: --> 227 binned = pd.cut(group.values, bins, **cut_kwargs) 228 new_dim_name = group.name + '_bins' 229 group = DataArray(binned, group.coords, name=new_dim_name)

~/code/miniconda/envs/standard/lib/python3.6/site-packages/pandas/core/reshape/tile.py in cut(x, bins, right, labels, retbins, precision, include_lowest) 134 precision=precision, 135 include_lowest=include_lowest, --> 136 dtype=dtype) 137 138 return _postprocess_for_cut(fac, bins, retbins, x_is_series,

~/code/miniconda/envs/standard/lib/python3.6/site-packages/pandas/core/reshape/tile.py in _bins_to_cuts(x, bins, right, labels, precision, include_lowest, dtype, duplicates) 225 return result, bins 226 --> 227 unique_bins = algos.unique(bins) 228 if len(unique_bins) < len(bins) and len(bins) != 2: 229 if duplicates == 'raise':

~/code/miniconda/envs/standard/lib/python3.6/site-packages/pandas/core/algorithms.py in unique(values) 354 355 table = htable(len(values)) --> 356 uniques = table.unique(values) 357 uniques = _reconstruct_data(uniques, dtype, original) 358

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Float64HashTable.unique (pandas/_libs/hashtable.c:9781)()

~/code/miniconda/envs/standard/lib/python3.6/site-packages/pandas/_libs/hashtable.cpython-36m-x86_64-linux-gnu.so in View.MemoryView.memoryview_cwrapper (pandas/_libs/hashtable.c:45205)()

~/code/miniconda/envs/standard/lib/python3.6/site-packages/pandas/_libs/hashtable.cpython-36m-x86_64-linux-gnu.so in View.MemoryView.memoryview.__cinit__ (pandas/_libs/hashtable.c:41440)()

ValueError: buffer source array is read-only

```
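The final ValueError comes from pandas: `pd.cut` runs `bins` through `algos.unique`, whose Cython hashtable code on this pandas version requires a writable buffer, so a read-only `bins` array fails. The `bins` that reaches `_func` is plausibly read-only because np.vectorize broadcasts arguments with `np.broadcast_to`, which returns read-only views. A minimal sketch of the failure mode and the copy workaround, with the read-only flag set by hand for illustration:

```python
import numpy as np
import pandas as pd

x = np.random.rand(100)
bins = np.linspace(0, 1, 11)
bins.setflags(write=False)  # simulate the read-only view that can reach _func at compute time

try:
    pd.cut(x, bins, include_lowest=True)
except ValueError as err:
    # pandas of the vintage in the traceback raises
    # "ValueError: buffer source array is read-only" here;
    # newer pandas versions accept read-only buffers
    print(err)

# giving pd.cut a writable copy of bins avoids the error
binned = pd.cut(x, bins.copy(), include_lowest=True)
```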

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Error when using .apply_ufunc with .groupby_bins 279909699
350081930 https://github.com/pydata/xarray/issues/1765#issuecomment-350081930 https://api.github.com/repos/pydata/xarray/issues/1765 MDEyOklzc3VlQ29tbWVudDM1MDA4MTkzMA== jbusecke 14314623 2017-12-07T20:16:29Z 2017-12-07T20:16:29Z CONTRIBUTOR

Ok, I will give it a try soon; for now I have shamelessly hardcoded it, and it seems to work so far.
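A rough, self-contained sketch of what the hardcoded workaround can look like, with made-up data, sizes, and dimension names (`_func` mirrors the one in the traceback; `vectorize=True` is used here instead of a dask-backed setup just to keep the sketch runnable). The point is that `bins` and `labels` are closed over inside the wrapper rather than passed through `apply_ufunc`:

```python
import numpy as np
import xarray as xr

# made-up example data; the last dimension is reduced into bins
data = xr.DataArray(np.random.rand(4, 5, 500), dims=['x', 'y', 'time'])
# deterministic "tracer" values so every bin is populated in every (x, y) slice
bin_data = xr.DataArray(np.broadcast_to(np.linspace(0, 1, 500), (4, 5, 500)).copy(),
                        dims=['x', 'y', 'time'], name='tracer')

bins = np.linspace(0, 1, 11)           # hardcoded: only visible inside _func
labels = 0.5 * (bins[:-1] + bins[1:])  # one label per bin

def _func(data, bin_data):
    # bins/labels come from the enclosing scope rather than from an argument
    da_data = xr.DataArray(data, dims=['time'])
    da_bin_data = xr.DataArray(bin_data, dims=['time'], name='tracer')
    binned = da_data.groupby_bins(da_bin_data, bins, labels=labels,
                                  include_lowest=True).sum()
    return binned.values

binned = xr.apply_ufunc(
    _func, data, bin_data,
    input_core_dims=[['time'], ['time']],  # no third argument, no 'dummy' core dim
    output_core_dims=[['tracer_bins']],
    vectorize=True,
)
```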

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Error when using .apply_ufunc with .groupby_bins 279909699
349831900 https://github.com/pydata/xarray/issues/1765#issuecomment-349831900 https://api.github.com/repos/pydata/xarray/issues/1765 MDEyOklzc3VlQ29tbWVudDM0OTgzMTkwMA== jbusecke 14314623 2017-12-07T01:27:43Z 2017-12-07T01:27:43Z CONTRIBUTOR

Upon some more testing it seems the problem is related to how I pass the argument `bins` to `apply_ufunc`: `input_core_dims=[dims, dims, ['dummy']],`

The bins are always going to be a 1D array that I want to pass unchanged to each call of `_func`. I thought that giving it a "dummy" core_dim would achieve that. If I hardcode the bins into `_func` and remove the third input argument (and the corresponding core_dim) from `apply_ufunc`, the results are as expected.
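To make the contrast concrete, here is a minimal sketch of the "dummy" core_dim variant described above. All names and data are made up, and a simple digitize-based reducer stands in for the real groupby_bins wrapper, so the sketch only illustrates how the third argument and `input_core_dims` line up, not the failure itself:

```python
import numpy as np
import xarray as xr

data = xr.DataArray(np.random.rand(4, 5, 500), dims=['x', 'y', 'time'])
bin_data = xr.DataArray(np.random.rand(4, 5, 500), dims=['x', 'y', 'time'])
bins = np.linspace(0, 1, 11)
dims = ['time']

def _func(data, bin_data, bins):
    # stand-in for the real wrapper: sum `data` into bins defined on `bin_data`
    idx = np.digitize(bin_data, bins) - 1
    return np.array([data[idx == i].sum() for i in range(len(bins) - 1)])

# bins is handed identically to every call by wrapping it in a DataArray
# with a throwaway 'dummy' core dimension
binned = xr.apply_ufunc(
    _func, data, bin_data, xr.DataArray(bins, dims=['dummy']),
    input_core_dims=[dims, dims, ['dummy']],
    output_core_dims=[['bin']],
    vectorize=True,
)
```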

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Error when using .apply_ufunc with .groupby_bins 279909699

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
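For reference, the row filter behind this page (author_association = "CONTRIBUTOR", issue = 279909699, user = 14314623, sorted by updated_at descending) can be reproduced against a local copy of this table. A minimal sketch using Python's sqlite3; the filename `github.db` is an assumption:

```python
import sqlite3

# hypothetical local copy of the database behind this Datasette instance
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, body
    FROM issue_comments
    WHERE author_association = 'CONTRIBUTOR'
      AND issue = 279909699
      AND user = 14314623
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, created_at, body in rows:
    # print a short preview of each comment body
    print(comment_id, created_at, body[:60].replace("\n", " "))

conn.close()
```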