issue_comments: 295993132


html_url: https://github.com/pydata/xarray/issues/1379#issuecomment-295993132
issue_url: https://api.github.com/repos/pydata/xarray/issues/1379
id: 295993132
node_id: MDEyOklzc3VlQ29tbWVudDI5NTk5MzEzMg==
user: 7799184
created_at: 2017-04-21T00:54:28Z
updated_at: 2017-04-21T10:05:27Z
author_association: CONTRIBUTOR

I realised that some of the Datasets I was trying to concatenate had different coordinate values (for coordinates that I had assumed were identical), so xr.concat was aligning those coordinates before concatenating, and the resulting Dataset ended up much larger than it should have been. Once I made sure I only concatenated Datasets with consistent coordinates, the concatenation worked.
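The blow-up described above can be reproduced with a small sketch (dataset names and coordinate values here are illustrative, not from the original issue): `xr.concat` uses an outer join by default, so mismatched coordinates are expanded to their union and padded with NaN.

```python
import numpy as np
import xarray as xr

# Two datasets whose "x" coordinates only partially overlap
ds1 = xr.Dataset({"v": ("x", np.arange(3.0))}, coords={"x": [0, 1, 2]})
ds2 = xr.Dataset({"v": ("x", np.arange(3.0))}, coords={"x": [2, 3, 4]})

# concat along a new "t" dimension aligns "x" with an outer join,
# so the result spans the union of both coordinates, padded with NaN
big = xr.concat([ds1, ds2], dim="t")
print(big.sizes["x"])  # 5, the union of {0, 1, 2} and {2, 3, 4}

# With identical coordinates there is no padding and no size inflation
ds3 = xr.Dataset({"v": ("x", np.arange(3.0))}, coords={"x": [0, 1, 2]})
same = xr.concat([ds1, ds3], dim="t")
print(same.sizes["x"])  # 3
```

This is why the concatenated result can be much larger than the sum of its inputs: every non-matching coordinate value adds a full padded slice.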

However, resource consumption is still quite high compared to doing the same thing with numpy arrays: memory usage increased by 42% with xr.concat (versus 6% with np.concatenate), and the whole processing took about 4 times longer.
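A minimal sketch of the kind of comparison described above (array sizes and the timing harness are my own; actual overhead will vary with data shape and xarray version):

```python
import time
import numpy as np
import xarray as xr

n = 200
arrs = [np.random.rand(100, 100) for _ in range(n)]
dsets = [xr.Dataset({"v": (("y", "x"), a)}) for a in arrs]

# Plain numpy: stack along a new leading axis
t0 = time.perf_counter()
np_result = np.concatenate([a[None] for a in arrs], axis=0)
t_np = time.perf_counter() - t0

# xarray: concatenate the Datasets along a new "t" dimension
t0 = time.perf_counter()
xr_result = xr.concat(dsets, dim="t")
t_xr = time.perf_counter() - t0

# Both produce the same (n, 100, 100) stack of values
print(np_result.shape, xr_result["v"].shape)
print(f"numpy: {t_np:.4f}s  xarray: {t_xr:.4f}s")
```

xr.concat carries extra work per object (index alignment, coordinate and attribute merging) that np.concatenate skips entirely, which is consistent with the gap reported here.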

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: 223231729