issue_comments: 661940009


html_url: https://github.com/pydata/xarray/issues/1086#issuecomment-661940009
issue_url: https://api.github.com/repos/pydata/xarray/issues/1086
id: 661940009
node_id: MDEyOklzc3VlQ29tbWVudDY2MTk0MDAwOQ==
user: 25382032
created_at: 2020-07-21T15:44:54Z
updated_at: 2020-07-21T15:46:06Z
author_association: NONE
body:

Hi,

```
import xarray as xr
from pathlib import Path

dir_input = Path('.')
data_ww3 = xr.open_mfdataset(dir_input.glob('*/' + 'WW3_EUR-11_CCCma-CanESM2_r1i1p1_CLMcom-CCLM4-8-17_v1_6hr_.nc'))

data_ww3 = data_ww3.isel(latitude=74, longitude=18)
df_ww3 = data_ww3[['hs', 't02', 't0m1', 't01', 'fp', 'dir', 'spr', 'dp']].to_dataframe()
```

You can download one file here: https://nasgdfa.ugr.es:5001/d/f/566168344466602780 (3.5 GB). I ran a profiler while opening 2 .nc files, and it showed that the to_dataframe() call was taking most of the time.

I'm just wondering if there's a way to reduce the computing time. I need to open 95 files, and it currently takes about 1.5 hours.
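In case it clarifies what I'm asking, here is a rough sketch of the kind of change I had in mind (not tested on these files, same directory layout as above): subsetting each file to the single grid point in a `preprocess` step and opening the files in parallel, so that `to_dataframe()` only ever sees one point's worth of data.

```
# Rough sketch, not tested on these files: subset each file to the single grid
# point while opening, so only a small slice is read before to_dataframe().
import xarray as xr
from pathlib import Path

dir_input = Path('.')

def select_point(ds):
    # Applied to every file before the datasets are combined.
    return ds.isel(latitude=74, longitude=18)

data_ww3 = xr.open_mfdataset(
    sorted(dir_input.glob('*/' + 'WW3_EUR-11_CCCma-CanESM2_r1i1p1_CLMcom-CCLM4-8-17_v1_6hr_.nc')),
    preprocess=select_point,  # standard open_mfdataset option
    parallel=True,            # open the files with dask.delayed
)

df_ww3 = data_ww3[['hs', 't02', 't0m1', 't01', 'fp', 'dir', 'spr', 'dp']].to_dataframe()
```

Would something along those lines be the recommended approach, or is there a better option?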

Thanks,

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

issue: 187608079