issue_comments: 361532119

html_url: https://github.com/pydata/xarray/issues/1836#issuecomment-361532119
issue_url: https://api.github.com/repos/pydata/xarray/issues/1836
id: 361532119
node_id: MDEyOklzc3VlQ29tbWVudDM2MTUzMjExOQ==
user: 102827
created_at: 2018-01-30T09:32:26Z
updated_at: 2018-01-30T09:32:26Z
author_association: CONTRIBUTOR
body:

Thanks @jhamman for looking into this.

Currently I am fine with using persist(), since I can break my analysis workflow down into time periods whose data fits into RAM on a large machine. As I wrote, the distributed scheduler failed for me because of #1464, but I would like to use it in the future. From other discussions about the dask schedulers (here and on SO), using the distributed scheduler seems to be the general recommendation anyway.
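For illustration, here is a minimal sketch of the workaround described above: open the dataset with dask chunks, select one time period that fits into memory, and persist() it before running the analysis. The file name, variable name, chunk sizes, and date range are placeholders, not taken from the original issue.

```python
# Sketch of the "process one time period at a time" workaround (assumed
# file/variable names, not from the issue).
import xarray as xr

# Open a dask-backed dataset; chunk sizes are illustrative.
ds = xr.open_dataset("data.nc", chunks={"time": 100})

# Select a single time period small enough to fit into RAM.
subset = ds.sel(time=slice("2000-01-01", "2000-12-31"))

# persist() loads the selected chunks into memory once, so subsequent
# computations reuse them instead of re-reading from disk each time.
subset = subset.persist()

# Run the actual analysis on the in-memory subset.
result = subset["temperature"].mean(dim="time").compute()
```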

In summary, I am fine with my current workaround. I do not think solving this issue is a high priority, particularly as the distributed scheduler continues to improve. The main annoyance was tracking down the problem described in my first post. Hence, the limitations of the different schedulers could perhaps be described a bit better in the documentation. Would you want a PR on this?

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: 289342234