home / github / issue_comments


issue_comments: 832111396


html_url: https://github.com/pydata/xarray/issues/1020#issuecomment-832111396
issue_url: https://api.github.com/repos/pydata/xarray/issues/1020
id: 832111396
node_id: MDEyOklzc3VlQ29tbWVudDgzMjExMTM5Ng==
user: 27021858
created_at: 2021-05-04T17:24:15Z
updated_at: 2021-05-04T17:24:15Z
author_association: NONE

@shoyer I am having a similar problem. I am reading 80 files totaling 8.3 GB, so each file is around 100 MB. If I understand you correctly, using open_mfdataset on data like this is not recommended? So the best practice would be to loop over the files?

PS: I still tried some dask-related operations, but each time I access .values or use to_dataframe, the memory usage explodes. Thanks a lot for answering ;)
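The "loop over the files" alternative mentioned above can be sketched as follows. This is a minimal illustration of the pattern, not the commenter's actual code: plain Python lists stand in for per-file datasets, and a mean stands in for whatever reduction is needed. With xarray, each loop iteration would instead be an `xr.open_dataset(path)`, a reduction, and a `ds.close()`, so only one file's data (plus small per-file summaries) is in memory at a time, rather than the full 8.3 GB that materializes when `.values` or `to_dataframe` forces the whole multi-file dataset to load.

```python
# Sketch of the loop-over-files pattern: reduce each "file" to a small
# summary, then combine the summaries. File contents here are stand-in
# lists; with xarray you would open, reduce, and close one file per pass.

def per_file_mean(values):
    # Reduce one file's data to a small summary so only the
    # summary survives past this iteration.
    return sum(values) / len(values)

def process_files(files):
    summaries = []
    for data in files:  # one file's data in memory at a time
        summaries.append(per_file_mean(data))
        # "data" is released here before the next file is read
    # Combine the small per-file summaries at the end.
    return sum(summaries) / len(summaries)

# Two fake "files" of values; real code would iterate over paths.
fake_files = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
print(process_files(fake_files))  # → 3.5 (mean of per-file means 2.0 and 5.0)
```

Note that a mean of per-file means equals the global mean only when files are equal-sized; for uneven files you would carry counts alongside the summaries.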

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: 180080354