issue_comments
3 rows where issue = 499477368 (assert_equal and dask), sorted by updated_at descending
560115162 | dcherian (MEMBER) | created 2019-12-01T14:33:08Z | updated 2019-12-01T14:33:08Z
https://github.com/pydata/xarray/issues/3350#issuecomment-560115162

I think the size 0 results from
We were specifying a name for the chunked array in
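The truncated comment above points at xarray having specified a fixed name for the chunked array. A minimal pure-Python sketch of why that is hazardous (no dask required; the dicts here are hypothetical stand-ins for dask task graphs, whose merge semantics are last-writer-wins per key): if two different arrays end up under the same key, merging their graphs silently drops one of them.

```
import numpy as np

# Two hypothetical dask-style task graphs that both use the key
# ('xarray-x', 0) -- e.g. because a fixed name was used when chunking.
graph_a = {('xarray-x', 0): np.zeros(10)}   # a 10-element chunk
graph_b = {('xarray-x', 0): np.zeros(0)}    # a size-0 chunk under the SAME key

# Merging the graphs (as happens when computing both arrays together)
# keeps only one definition per key -- the other chunk is silently lost.
merged = {**graph_a, **graph_b}
print(merged[('xarray-x', 0)].shape)
```

This mirrors the symptom reported in the issue: a size-0 chunk showing up where real data was expected.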
535990462 | shoyer (MEMBER) | created 2019-09-27T15:35:55Z | updated 2019-09-27T15:35:55Z
https://github.com/pydata/xarray/issues/3350#issuecomment-535990462

Interestingly, it looks like the difference comes down to whether we chunk DataArrays or Datasets. The former produces graphs with fixed (reproducible) keys, the latter doesn't:

```
In [57]: dict(ds.chunk().x.data.dask)
Out[57]: {('xarray-x-a46bb46a12a44073da484c1311d00dec', 0): array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])}

In [58]: dict(ds.chunk().x.data.dask)
Out[58]: {('xarray-x-a46bb46a12a44073da484c1311d00dec', 0): array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])}

In [59]: dict(ds.x.chunk().data.dask)
Out[59]: {('xarray-<this-array>-d75d5cc0f0ce1b56590d80702339c0f0', 0): array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])}

In [60]: dict(ds.x.chunk().data.dask)
Out[60]: {('xarray-<this-array>-0f78e51941cfb0e25d41ac24ef330a50', 0): array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])}
```

But clearly this should work either way. The size zero dimension is a give-away that the problem has something to do with dask's
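The comment above contrasts two key-naming strategies: a content-hashed token that is reproducible across calls (the `ds.chunk()` case) versus a fresh random token per call (the `ds.x.chunk()` case). A stdlib-only sketch of the two strategies, with hypothetical helper names (real dask derives deterministic tokens via `dask.base.tokenize`):

```
import hashlib
import uuid
import numpy as np

def deterministic_name(prefix, arr):
    # Hash the array's dtype, shape, and bytes, so that equal data
    # always yields the same key -- graphs built twice can share tasks.
    h = hashlib.md5()
    h.update(str((arr.dtype, arr.shape)).encode())
    h.update(np.ascontiguousarray(arr).tobytes())
    return f"{prefix}-{h.hexdigest()}"

def random_name(prefix):
    # A fresh UUID each call: chunking the same data twice produces
    # different keys, so the resulting graphs never line up.
    return f"{prefix}-{uuid.uuid4().hex}"

x = np.zeros(10)
assert deterministic_name("xarray-x", x) == deterministic_name("xarray-x", x)
assert random_name("xarray-x") != random_name("xarray-x")
```

With reproducible keys, combining two chunkings of the same data is safe; with random keys, the two graphs describe "different" arrays even when the data is identical.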
535987790 | shoyer (MEMBER) | created 2019-09-27T15:28:41Z | updated 2019-09-27T15:28:41Z
https://github.com/pydata/xarray/issues/3350#issuecomment-535987790

Here's a slightly simpler case:

```
In [28]: ds = xr.Dataset({'x': (('y',), np.zeros(10))})

In [29]: (ds.chunk().isnull() & ds.chunk(5).isnull()).compute()
ValueError: operands could not be broadcast together with shapes (0,) (5,)
```
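The final ValueError in the repro above is ordinary NumPy broadcasting: once the graph mix-up hands the elementwise `&` a spurious size-0 chunk alongside a real 5-element chunk, shapes `(0,)` and `(5,)` cannot broadcast. The same failure reproduced in pure NumPy, without xarray or dask:

```
import numpy as np

a = np.zeros(0, dtype=bool)  # the spurious size-0 chunk
b = np.zeros(5, dtype=bool)  # a real 5-element chunk
try:
    a & b
    msg = None
except ValueError as e:
    msg = str(e)  # same broadcast error as quoted in the comment above
print(msg)
```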
```
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```