issue_comments
11 rows where issue = 158958801 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
248415522 | https://github.com/pydata/xarray/issues/873#issuecomment-248415522 | https://api.github.com/repos/pydata/xarray/issues/873 | MDEyOklzc3VlQ29tbWVudDI0ODQxNTUyMg== | shoyer 1217238 | 2016-09-20T19:55:55Z | 2016-09-20T19:55:55Z | MEMBER | Great, glad that worked! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Broadcast error when dataset is recombined after a stack/groupby/apply/unstack sequence 158958801 | |
248409634 | https://github.com/pydata/xarray/issues/873#issuecomment-248409634 | https://api.github.com/repos/pydata/xarray/issues/873 | MDEyOklzc3VlQ29tbWVudDI0ODQwOTYzNA== | monocongo 1328158 | 2016-09-20T19:37:07Z | 2016-09-20T19:37:07Z | NONE | Thanks for this clarification, Stephan. Apparently I didn't read the API documentation closely enough, as I was assuming that the function is applied to the underlying ndarray rather than to all data variables of a Dataset object. Now that I've taken the approach you suggested I'm cooking with gas, and it's very encouraging. I really appreciate your help. --James |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Broadcast error when dataset is recombined after a stack/groupby/apply/unstack sequence 158958801 | |
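The distinction James describes above — `Dataset.groupby(...).apply()` hands each group to the function as a `Dataset` slice (so the operation hits every data variable), not as a bare ndarray — can be sketched as follows. The variable and dimension names here are illustrative, and current xarray spells the method `.map()` rather than the 2016-era `.apply()`:

```python
import numpy as np
import xarray as xr

# Illustrative Dataset: one variable over (time, point); names are assumed.
ds = xr.Dataset(
    {"prcp": (("time", "point"), np.arange(12.0).reshape(4, 3))},
    coords={"time": np.arange(4), "point": np.arange(3)},
)

def double(group):
    # `group` arrives as a Dataset (one slice per group), not a raw ndarray,
    # so this multiplication applies to each data variable in the group.
    return group * 2

# The API in this thread called this .apply(); current xarray calls it .map().
result = ds.groupby("point").map(double)
```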
248345053 | https://github.com/pydata/xarray/issues/873#issuecomment-248345053 | https://api.github.com/repos/pydata/xarray/issues/873 | MDEyOklzc3VlQ29tbWVudDI0ODM0NTA1Mw== | shoyer 1217238 | 2016-09-20T15:54:02Z | 2016-09-20T15:54:02Z | MEMBER | GroupBy is working as intended here. You can certainly still use |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Broadcast error when dataset is recombined after a stack/groupby/apply/unstack sequence 158958801 | |
248216388 | https://github.com/pydata/xarray/issues/873#issuecomment-248216388 | https://api.github.com/repos/pydata/xarray/issues/873 | MDEyOklzc3VlQ29tbWVudDI0ODIxNjM4OA== | monocongo 1328158 | 2016-09-20T06:42:53Z | 2016-09-20T06:42:53Z | NONE | Thanks, Stephan. My code uses numpy.convolve() in several key places, so if that function is a deal breaker for using xarray then I'll hold off until that's fixed. In the meantime if there's anything else I can do to help you work this out then please let me know. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Broadcast error when dataset is recombined after a stack/groupby/apply/unstack sequence 158958801 | |
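For reference, the `numpy.convolve()` James mentions is a plain 1-D operation; a minimal sketch (with assumed data values) of the kind of per-gridpoint smoothing it would do inside a `groupby().apply()`:

```python
import numpy as np

# A short time series and a 3-step moving-average kernel (values assumed).
series = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
window = np.ones(3) / 3.0

# mode='same' keeps the output the same length as the input, which matters
# when the smoothed series must drop back into the original Dataset shape.
smoothed = np.convolve(series, window, mode="same")
# smoothed -> [1., 2., 3., 4., 3.] (the edge values are partial averages)
```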
242951236 | https://github.com/pydata/xarray/issues/873#issuecomment-242951236 | https://api.github.com/repos/pydata/xarray/issues/873 | MDEyOklzc3VlQ29tbWVudDI0Mjk1MTIzNg== | shoyer 1217238 | 2016-08-28T01:53:14Z | 2016-08-28T01:53:14Z | MEMBER | The first issue is an xarray bug. See #989 for a fix. The work around is not to provide an encoded dtype if the variable already has the dtype you want. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Broadcast error when dataset is recombined after a stack/groupby/apply/unstack sequence 158958801 | |
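Stephan's workaround above — don't pass an encoded dtype when the variable already has the dtype you want — might look like this sketch (variable names assumed; note the encoding key is `_FillValue`, with a leading underscore):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"prcp": ("time", np.arange(3.0))})

# Cast up front so the netCDF4 backend never has to re-encode the dtype...
ds["prcp"] = ds["prcp"].astype("float32")

# ...and keep only the fill value in the encoding; writing would then be
# ds.to_netcdf("out.nc", encoding=encoding)
encoding = {"prcp": {"_FillValue": np.nan}}
```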
242950771 | https://github.com/pydata/xarray/issues/873#issuecomment-242950771 | https://api.github.com/repos/pydata/xarray/issues/873 | MDEyOklzc3VlQ29tbWVudDI0Mjk1MDc3MQ== | shoyer 1217238 | 2016-08-28T01:37:44Z | 2016-08-28T01:37:44Z | MEMBER | The second issue is that |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Broadcast error when dataset is recombined after a stack/groupby/apply/unstack sequence 158958801 | |
242535724 | https://github.com/pydata/xarray/issues/873#issuecomment-242535724 | https://api.github.com/repos/pydata/xarray/issues/873 | MDEyOklzc3VlQ29tbWVudDI0MjUzNTcyNA== | monocongo 1328158 | 2016-08-25T20:48:45Z | 2016-08-25T20:48:45Z | NONE | Thanks, Stephan. In general things appear to be working much more as expected now, probably (hopefully) this is just an edge case/nuance that won't be too difficult for you guys to address. If so and if I don't run across any other issues then my code will be dramatically simplified by leveraging xarray rather than writing code to enable shared memory objects for the multiprocessing side of things (my assumption being that you guys have done a better job of that than I can). A gist with example code and a smallish data file attached to the comment is here: https://gist.github.com/monocongo/e8e883c2355f7a92bb0b9d24db5407a8 Please let me know if I can do anything else to help you help me. Godspeed! --James |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Broadcast error when dataset is recombined after a stack/groupby/apply/unstack sequence 158958801 | |
241625354 | https://github.com/pydata/xarray/issues/873#issuecomment-241625354 | https://api.github.com/repos/pydata/xarray/issues/873 | MDEyOklzc3VlQ29tbWVudDI0MTYyNTM1NA== | shoyer 1217238 | 2016-08-23T04:42:41Z | 2016-08-23T04:42:41Z | MEMBER | Could you please share a data file and/or code which I can run to reproduce each of these issues? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Broadcast error when dataset is recombined after a stack/groupby/apply/unstack sequence 158958801 | |
241540585 | https://github.com/pydata/xarray/issues/873#issuecomment-241540585 | https://api.github.com/repos/pydata/xarray/issues/873 | MDEyOklzc3VlQ29tbWVudDI0MTU0MDU4NQ== | monocongo 1328158 | 2016-08-22T20:32:20Z | 2016-08-22T20:32:20Z | NONE | I get the following error now when I try to run the gist code referenced in the original message above:
```
$ python -u xarray_gist.py /dev/shm/nclimgrid_prcp_reduced.nc nclimgrid_prcp_doubled.nc
Traceback (most recent call last):
  File "xarray_gist.py", line 45, in <module>
    encoding = {variable_name: {'FillValue': np.nan, 'dtype': 'float32'}})
  File "/home/james.adams/anaconda3/lib/python3.5/site-packages/xarray/core/dataset.py", line 782, in to_netcdf
    engine=engine, encoding=encoding)
  File "/home/james.adams/anaconda3/lib/python3.5/site-packages/xarray/backends/api.py", line 354, in to_netcdf
    dataset.dump_to_store(store, sync=sync, encoding=encoding)
  File "/home/james.adams/anaconda3/lib/python3.5/site-packages/xarray/core/dataset.py", line 728, in dump_to_store
    store.store(variables, attrs, check_encoding)
  File "/home/james.adams/anaconda3/lib/python3.5/site-packages/xarray/backends/common.py", line 234, in store
    check_encoding_set)
  File "/home/james.adams/anaconda3/lib/python3.5/site-packages/xarray/backends/common.py", line 209, in store
    self.set_variables(variables, check_encoding_set)
  File "/home/james.adams/anaconda3/lib/python3.5/site-packages/xarray/backends/common.py", line 219, in set_variables
    target, source = self.prepare_variable(name, v, check)
  File "/home/james.adams/anaconda3/lib/python3.5/site-packages/xarray/backends/netCDF4_.py", line 266, in prepare_variable
    raise_on_invalid=check_encoding)
  File "/home/james.adams/anaconda3/lib/python3.5/site-packages/xarray/backends/netCDF4_.py", line 167, in _extract_nc4_encoding
    ' %r' % (backend, invalid))
ValueError: unexpected encoding parameters for 'netCDF4' backend: ['dtype']
```
Additionally I see the following errors when I run some other code which uses the same dataset.groupby().apply() technique (the trouble appears to show up within numpy.convolve()):
Please advise if I can provide any further information which might help work this out, or if I have made wrong assumptions as to how this feature should be used. Thanks. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Broadcast error when dataset is recombined after a stack/groupby/apply/unstack sequence 158958801 | |
236092858 | https://github.com/pydata/xarray/issues/873#issuecomment-236092858 | https://api.github.com/repos/pydata/xarray/issues/873 | MDEyOklzc3VlQ29tbWVudDIzNjA5Mjg1OA== | shoyer 1217238 | 2016-07-29T04:38:36Z | 2016-07-29T04:38:36Z | MEMBER | Fixed by #867 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Broadcast error when dataset is recombined after a stack/groupby/apply/unstack sequence 158958801 | |
224484803 | https://github.com/pydata/xarray/issues/873#issuecomment-224484803 | https://api.github.com/repos/pydata/xarray/issues/873 | MDEyOklzc3VlQ29tbWVudDIyNDQ4NDgwMw== | shoyer 1217238 | 2016-06-08T04:34:46Z | 2016-06-08T04:34:46Z | MEMBER | Thanks for raising this one on GitHub (after I forgot to respond on the mailing list!). I have a partial fix for this in https://github.com/pydata/xarray/pull/867, but we clearly need some more tests to verify that groupby with a multi-index works properly. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Broadcast error when dataset is recombined after a stack/groupby/apply/unstack sequence 158958801 |
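The stack/groupby/apply/unstack round trip named in the issue title — which #867 set out to fix — looks roughly like this on current xarray (shapes, names, and the per-series function are all assumed for illustration):

```python
import numpy as np
import xarray as xr

# Assumed 3-D grid: collapse lat/lon into a single 'point' dimension,
# apply a per-series function to each point, then restore the grid.
ds = xr.Dataset(
    {"prcp": (("time", "lat", "lon"), np.arange(24.0).reshape(4, 2, 3))}
)

stacked = ds.stack(point=("lat", "lon"))
# .map() is the current name for the .apply() used throughout this thread;
# here each group `g` is the time series at one (lat, lon) point.
anomalies = stacked.groupby("point").map(lambda g: g - g.mean())
unstacked = anomalies.unstack("point")
```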
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);