issue_comments: 535990462
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/3350#issuecomment-535990462 | https://api.github.com/repos/pydata/xarray/issues/3350 | 535990462 | MDEyOklzc3VlQ29tbWVudDUzNTk5MDQ2Mg== | 1217238 | 2019-09-27T15:35:55Z | 2019-09-27T15:35:55Z | MEMBER |  | 499477368 |

body:

Interestingly, it looks like the difference comes down to whether we chunk DataArrays or Datasets. The latter produces graphs with fixed (reproducible) keys; the former doesn't:

```
In [57]: dict(ds.chunk().x.data.dask)
Out[57]: {('xarray-x-a46bb46a12a44073da484c1311d00dec', 0): array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])}

In [58]: dict(ds.chunk().x.data.dask)
Out[58]: {('xarray-x-a46bb46a12a44073da484c1311d00dec', 0): array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])}

In [59]: dict(ds.x.chunk().data.dask)
Out[59]: {('xarray-<this-array>-d75d5cc0f0ce1b56590d80702339c0f0', 0): array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])}

In [60]: dict(ds.x.chunk().data.dask)
Out[60]: {('xarray-<this-array>-0f78e51941cfb0e25d41ac24ef330a50', 0): array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])}
```

But clearly this should work either way. The size zero dimension is a give-away that the problem has something to do with dask's

reactions:

```
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
```
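As a rough, self-contained way to check the behavior described in the comment, the sketch below rebuilds a toy Dataset and compares the dask graph keys produced by chunking the Dataset versus chunking the DataArray. The Dataset construction (a variable `x` of ten zeros along a hypothetical `t` dimension) is an assumption chosen to mimic the arrays shown in the `Out[...]` lines, not taken from the issue itself, and on current xarray/dask releases both paths may already produce deterministic keys.

```
import numpy as np
import xarray as xr

# Toy data mimicking the length-10 zero array seen in the outputs above;
# the variable/dimension names here are assumptions, not from the issue.
ds = xr.Dataset({"x": ("t", np.zeros(10))})

# Chunk the Dataset, then pull out the DataArray: in the comment above this
# produced the same dask key on every call.
keys_ds_1 = sorted(dict(ds.chunk().x.data.dask))
keys_ds_2 = sorted(dict(ds.chunk().x.data.dask))
print("Dataset.chunk() keys stable:  ", keys_ds_1 == keys_ds_2)

# Chunk the DataArray directly: at the time of the report this produced a
# fresh token (and hence different keys) on every call.
keys_da_1 = sorted(dict(ds.x.chunk().data.dask))
keys_da_2 = sorted(dict(ds.x.chunk().data.dask))
print("DataArray.chunk() keys stable:", keys_da_1 == keys_da_2)
```

Printing the key sets themselves (rather than only comparing them) is another quick way to see whether the hash suffix in names like `xarray-x-a46bb46a...` changes between calls.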