issue_comments: 382071801
html_url: https://github.com/pydata/xarray/pull/1983#issuecomment-382071801
issue_url: https://api.github.com/repos/pydata/xarray/issues/1983
id: 382071801
node_id: MDEyOklzc3VlQ29tbWVudDM4MjA3MTgwMQ==
user: 1117224
created_at: 2018-04-17T17:14:33Z
updated_at: 2018-04-17T17:38:42Z
author_association: NONE
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: 304589831

body:

Thanks @jhamman for working on this! I did a test on my real-world data (1202 files, ~3 MB each) on my local computer and am not getting the results I expected:

1) No speed-up with `parallel=True`
2) A slow-down when using distributed (processes=16, cores=16)

Am I missing something?

```python
nc_files = glob.glob(E.obs['NSIDC_0081']['sipn_nc']+'/*.nc')
print(len(nc_files))
# 1202

# Parallel False
%time ds = xr.open_mfdataset(nc_files, concat_dim='time', parallel=False, autoclose=True)
# CPU times: user 57.8 s, sys: 3.2 s, total: 1min 1s
# Wall time: 1min

# Parallel True with default scheduler
%time ds = xr.open_mfdataset(nc_files, concat_dim='time', parallel=True, autoclose=True)
# CPU times: user 1min 16s, sys: 9.82 s, total: 1min 26s
# Wall time: 1min 16s

# Parallel True with distributed
from dask.distributed import Client
client = Client()
print(client)
# <Client: scheduler='tcp://127.0.0.1:43291' processes=16 cores=16>
%time ds = xr.open_mfdataset(nc_files, concat_dim='time', parallel=True, autoclose=True)
# CPU times: user 2min 17s, sys: 12.3 s, total: 2min 29s
# Wall time: 3min 48s
```

On feature/parallel_open_netcdf, commit 280a46f13426a462fb3e983cfd5ac7a0565d1826.
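The `%time` magic in the session above is IPython-specific. For anyone reproducing this comparison from a plain Python script, a minimal timing helper can be sketched as follows (the `timed` helper name is illustrative, and the commented-out `open_mfdataset` call assumes the same `xarray`/`dask` setup and `nc_files` list as the comment above):

```python
import time

def timed(label, fn):
    """Run fn once and report wall-clock time, mimicking IPython's %time."""
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    print(f"{label}: wall time {elapsed:.2f} s")
    return result

# With the comment's workload this would look like (requires xarray + dask):
# ds = timed("parallel=False",
#            lambda: xr.open_mfdataset(nc_files, concat_dim='time',
#                                      parallel=False, autoclose=True))
```

Wrapping each call in a lambda keeps the helper generic, so the serial, threaded, and distributed variants can all be timed the same way.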