issue_comments
16 rows where issue = 614144170 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1065536538 | https://github.com/pydata/xarray/issues/4043#issuecomment-1065536538 | https://api.github.com/repos/pydata/xarray/issues/4043 | IC_kwDOAMm_X84_gswa | sgdecker 8419421 | 2022-03-11T21:16:59Z | 2022-03-11T21:16:59Z | NONE | I believe I am experiencing a similar issue, although with code that I thought was smart enough to chunk the data request into smaller pieces:
```python
import numpy as np
import xarray as xr
from dask.diagnostics import ProgressBar
import intake

wrf_url = ('https://rda.ucar.edu/thredds/catalog/files/g/ds612.0/'
           'PGW3D/2006/catalog.xml')
catalog_u = intake.open_thredds_merged(wrf_url, path=['_U_2006060'])
catalog_v = intake.open_thredds_merged(wrf_url, path=['_V_2006060'])
ds_u = catalog_u.to_dask()
ds_u['U'] = ds_u.U.chunk("auto")
ds_v = catalog_v.to_dask()
ds_v['V'] = ds_v.V.chunk("auto")
ds = xr.merge((ds_u, ds_v))

def unstagger(ds, var, coord, new_coord):
    var1 = ds[var].isel({coord: slice(None, -1)})
    var2 = ds[var].isel({coord: slice(1, None)})
    return ((var1 + var2) / 2).rename({coord: new_coord})

with ProgressBar():
    ds['U_unstaggered'] = unstagger(ds, 'U', 'west_east_stag', 'west_east')
    ds['V_unstaggered'] = unstagger(ds, 'V', 'south_north_stag', 'south_north')
    ds['speed'] = np.hypot(ds.U_unstaggered, ds.V_unstaggered)
    ds.speed.isel(bottom_top=10).sel(Time='2006-06-07T18:00').plot()
```
This throws an error because, according to the RDA help folks, a request for an entire variable is made, which far exceeds their server's 500 MB request limit:
Here's the error:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
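The comment above leans on `chunk("auto")`, which can still produce chunks bigger than the server will serve. A minimal sketch of the alternative, requesting explicit chunk sizes so each OPeNDAP read stays under the reported 500 MB limit (the chunk choice here is an illustrative assumption, not a tested value):
```python
import intake

# Same RDA catalog as in the comment above
wrf_url = ('https://rda.ucar.edu/thredds/catalog/files/g/ds612.0/'
           'PGW3D/2006/catalog.xml')
catalog_u = intake.open_thredds_merged(wrf_url, path=['_U_2006060'])
ds_u = catalog_u.to_dask()

# Explicit chunks instead of chunk("auto"): one time step per chunk keeps
# each OPeNDAP request far smaller than the whole variable.  If a single
# time step is still too large, chunk the vertical dimension as well.
ds_u['U'] = ds_u.U.chunk({'Time': 1})
print(ds_u.U.chunks)
```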
657136785 | https://github.com/pydata/xarray/issues/4043#issuecomment-657136785 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDY1NzEzNjc4NQ== | dopplershift 221526 | 2020-07-11T22:01:55Z | 2020-07-11T22:01:55Z | CONTRIBUTOR | Probably worth raising upstream with the THREDDS team. I do wonder if there are some issues with the chunking/compression of the native .nc files at play here. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
628484954 | https://github.com/pydata/xarray/issues/4043#issuecomment-628484954 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyODQ4NDk1NA== | aragong 48764870 | 2020-05-14T08:37:43Z | 2020-05-14T08:37:43Z | NONE | We tried several times setting 2000 MB with this configuration in THREDDS:
I tried with 50 MB and the elapsed time was huge: Local Network - Elapsed time: 0.5819 minutes; OpenDAP - Elapsed time: 37.1448 minutes. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
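As a rough illustration of the trade-off reported above, a sketch of how the chunk size sets the number of OPeNDAP round trips; the total size is an assumed, illustrative figure, not a measurement from the thread:
```python
# Each dask chunk becomes at least one OPeNDAP request, so smaller chunks
# mean many more round trips to the server; per-request latency then adds up.
total_mb = 6000   # assumed size of the full two-month field, for illustration

for chunk_mb in (50, 500, 2000):
    n_requests = total_mb / chunk_mb
    print(f"{chunk_mb:>5} MB chunks -> ~{n_requests:.0f} requests")
```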
628016841 | https://github.com/pydata/xarray/issues/4043#issuecomment-628016841 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyODAxNjg0MQ== | rabernat 1197350 | 2020-05-13T14:13:06Z | 2020-05-13T14:13:06Z | MEMBER |
You might want to experiment with smaller chunks. In general, opendap will always introduce overhead compared to direct file access. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
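A minimal sketch of what "smaller chunks" could look like when opening these files; the chunk size below is an illustrative assumption, not a value recommended in the thread:
```python
import xarray as xr

urls = ['http://193.144.213.180:8080/thredds/dodsC/Wind/Wind_ERA5/Global/'
        'Wind_ERA5_Global_1998.05.nc',
        'http://193.144.213.180:8080/thredds/dodsC/Wind/Wind_ERA5/Global/'
        'Wind_ERA5_Global_1998.06.nc']

# Instead of chunks={'time': '500MB'}, ask dask for smaller pieces, e.g. one
# day of hourly data per chunk; each OPeNDAP request then stays well below
# the server limit, at the cost of issuing more requests.
ds = xr.open_mfdataset(urls, chunks={'time': 24})
print(ds['u'].chunks)
```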
627882905 | https://github.com/pydata/xarray/issues/4043#issuecomment-627882905 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyNzg4MjkwNQ== | aragong 48764870 | 2020-05-13T10:01:08Z | 2020-05-13T10:01:08Z | NONE | I followed your recommendations @rabernat, please see my test code below.
```python
import xarray as xr
import os
from datetime import datetime, timedelta
import pandas as pd
import shutil
import numpy as np
import time

lonlat_box = [-4.5, -2.5, 44, 45]

# ERA5 IHdata - Local -------------------
ds = xr.open_mfdataset(['raw/Wind_ERA5_Global_1998.05.nc',
                        'raw/Wind_ERA5_Global_1998.06.nc'])
ds = ds.get('u')

# from 0º,360º to -180º,180º
ds['lon'] = (ds.lon + 180) % 360 - 180
# lat is upside down --> sort ascending
ds = ds.sortby(['lon', 'lat'])
# Make the selection
ds = ds.sel(lon=slice(lonlat_box[0], lonlat_box[1]),
            lat=slice(lonlat_box[2], lonlat_box[3]))
print(ds)

tic = time.perf_counter()
df = ds.to_dataframe()
toc = time.perf_counter()
print(f"\nLocal Network - Elapsed time: {(toc - tic)/60:0.4f} minutes\n\n")
del ds, df

# ERA5 IHdata - Opendap ---------------------
ds = xr.open_mfdataset(['http://193.144.213.180:8080/thredds/dodsC/Wind/Wind_ERA5/Global/Wind_ERA5_Global_1998.05.nc',
                        'http://193.144.213.180:8080/thredds/dodsC/Wind/Wind_ERA5/Global/Wind_ERA5_Global_1998.06.nc'],
                       chunks={'time': '500MB'})
ds = ds.get('u')

# from 0º,360º to -180º,180º
ds['lon'] = (ds.lon + 180) % 360 - 180
# lat is upside down --> sort ascending
ds = ds.sortby(['lon', 'lat'])
# Make the selection
ds = ds.sel(lon=slice(lonlat_box[0], lonlat_box[1]),
            lat=slice(lonlat_box[2], lonlat_box[3]))
print(ds)

tic = time.perf_counter()
df = ds.to_dataframe()
toc = time.perf_counter()
print(f"\n OpenDAP - Elapsed time: {(toc - tic)/60:0.4f} minutes\n\n")
del ds, df
```
Output:
```
Local Network - Elapsed time: 0.4037 minutes

<xarray.DataArray 'u' (lat: 5, lon: 9, time: 1464)>
dask.array<getitem, shape=(5, 9, 1464), dtype=float32, chunksize=(5, 9, 120), chunktype=numpy.ndarray>
Coordinates:
  * lon      (lon) float32 -4.5 -4.25 -4.0 -3.75 -3.5 -3.25 -3.0 -2.75 -2.5
  * lat      (lat) float32 44.0 44.25 44.5 44.75 45.0
  * time     (time) datetime64[ns] 1998-05-01 ... 1998-06-30T23:00:00
Attributes:
    units:      m s**-1
    long_name:  10 metre U wind component

OpenDAP - Elapsed time: 8.1971 minutes
```
Using this time chunk of 500 MB the code runs properly, but it is really slow compared with the response through the local network. I will try to raise this limit in the OpenDAP configuration with our IT team to a more reasonable value. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
627387025 | https://github.com/pydata/xarray/issues/4043#issuecomment-627387025 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyNzM4NzAyNQ== | rabernat 1197350 | 2020-05-12T14:38:37Z | 2020-05-12T14:38:37Z | MEMBER |
This depends entirely on the TDS server configuration. See comment in https://github.com/Unidata/netcdf-c/issues/1667#issuecomment-597372065. The default limit appears to be 500 MB. It's important to note that none of this has to do with xarray. Xarray is simply the top layer of a very deep software stack. If the TDS server could deliver larger data requests, and the netCDF4-python library could accept them, xarray would have no problem. |
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
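To make the 500 MB figure concrete, a small arithmetic sketch; the global ERA5 grid size (721 × 1440) is an assumption used only for illustration:
```python
# Size of one uncompressed float32 request vs. the ~500 MB TDS request limit.
nlat, nlon = 721, 1440          # assumed global 0.25-degree ERA5 grid
ntime = 1464                    # two months of hourly data, as in the thread
bytes_per_value = 4             # float32

full_request = nlat * nlon * ntime * bytes_per_value
subset_request = 5 * 9 * ntime * bytes_per_value   # the lon/lat box above

print(f"full field : {full_request / 1e9:6.1f} GB  (rejected by the server)")
print(f"small box  : {subset_request / 1e6:6.2f} MB (fits easily)")
```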
627375551 | https://github.com/pydata/xarray/issues/4043#issuecomment-627375551 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyNzM3NTU1MQ== | aragong 48764870 | 2020-05-12T14:19:24Z | 2020-05-12T14:19:24Z | NONE | @rabernat - Thank you! I will review the code (thank you for the extra comments, I really appreciate that) and follow your instructions to test the chunk size. Just for my understanding: so, theoretically, it is not possible to make big requests without using chunking? The THREDDS server is under our management and we want to know if these errors can be solved through any specific configuration of the service in THREDDS. Thank you in advance! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
627368616 | https://github.com/pydata/xarray/issues/4043#issuecomment-627368616 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyNzM2ODYxNg== | rabernat 1197350 | 2020-05-12T14:07:39Z | 2020-05-12T14:07:39Z | MEMBER | I have spent plenty of time debugging these sorts of issues. It really helps to take xarray out of the equation. Try making your request with just the netCDF4 library--that's all that xarray uses under the hood. Overall your example is very complicated, which makes it hard to find the core issue. You generally want to try something like this:
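(A minimal sketch of that kind of netCDF4-only test, assuming one of the OPeNDAP URLs used elsewhere in this thread; this is an illustration, not the snippet that originally accompanied the comment.)
```python
import netCDF4

# One of the OPeNDAP URLs used elsewhere in this thread
url = ('http://193.144.213.180:8080/thredds/dodsC/Wind/Wind_ERA5/Global/'
       'Wind_ERA5_Global_1998.05.nc')

nc = netCDF4.Dataset(url)      # netCDF4 is what xarray uses underneath
u = nc.variables['u']
print(u.shape, u.dtype)

# Start with a tiny slice and grow it; the size at which this starts raising
# "NetCDF: Access failure" is roughly the largest request the server allows.
piece = u[:10, :10, :10]
print(piece.shape)
nc.close()
```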
A few additional comments about your code:
```python
# Select spatial subset [lon,lat]
ds = ds.where((ds.lon >= Lon[0] - dl) & (ds.lon <= Lon[1] + dl) &
              (ds.lat >= Lat[0] - dl) & (ds.lat <= Lat[1] + dl), drop=True)
```
This is NOT how you do subsetting with xarray. `where` is meant for masking. I recommend reviewing the xarray docs on indexing and selecting. Your call should be something like:
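(A short sketch of label-based selection on a tiny synthetic dataset; again not the original snippet, and the synthetic grid is an assumption purely to make the example runnable.)
```python
import numpy as np
import xarray as xr

# Stand-in for the ERA5 file: a small grid with ascending lat/lon
ds = xr.Dataset(
    {"u": (("lat", "lon"), np.zeros((181, 360)))},
    coords={"lat": np.arange(-90.0, 91.0), "lon": np.arange(-180.0, 180.0)},
)

lonlat_box = [-4.5, -2.5, 44, 45]

# Label-based selection: slices on the coordinates, no boolean masking
subset = ds.sel(lon=slice(lonlat_box[0], lonlat_box[1]),
                lat=slice(lonlat_box[2], lonlat_box[3]))
print(subset.sizes)
```
Slice-based `.sel` maps onto a single small index range, which is exactly what you want when the data sits behind an OPeNDAP server.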
What's the difference?
Can you do this sorting after loading the data? It's an expensive operation and might not interact well with the OpenDAP server. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
627363191 | https://github.com/pydata/xarray/issues/4043#issuecomment-627363191 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyNzM2MzE5MQ== | aragong 48764870 | 2020-05-12T13:58:26Z | 2020-05-12T13:58:26Z | NONE | Thank you @dcherian. We know that if the request is small it works fine, but we want to make big requests for data. Is there any limitation when using OpenDAP? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
627357616 | https://github.com/pydata/xarray/issues/4043#issuecomment-627357616 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyNzM1NzYxNg== | dcherian 2448579 | 2020-05-12T13:48:49Z | 2020-05-12T13:48:49Z | MEMBER | I would check your server logs if you can. Or avoid xarray and try with lower level pydap / netCDF4. This may be useful: https://github.com/pangeo-data/pangeo/issues/767. Maybe you're requesting too much data? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
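For the "lower level pydap" route mentioned above, a minimal sketch (the URL comes from this thread; the exact variable layout returned by the server is an assumption):
```python
from pydap.client import open_url

# One of the OPeNDAP URLs from this thread
url = ('http://193.144.213.180:8080/thredds/dodsC/Wind/Wind_ERA5/Global/'
       'Wind_ERA5_Global_1998.05.nc')

dataset = open_url(url)
print(list(dataset.keys()))   # variables exposed by the OPeNDAP endpoint

# Request a deliberately small hyperslab of 'u'; if this works but larger
# slices fail, the problem is the request size rather than xarray itself.
u = dataset['u']
piece = u[:10, :10, :10]
print(piece.shape)
```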
627346640 | https://github.com/pydata/xarray/issues/4043#issuecomment-627346640 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyNzM0NjY0MA== | aragong 48764870 | 2020-05-12T13:30:39Z | 2020-05-12T13:30:39Z | NONE | Thank you @ocefpaf! But it raised the same error. I also tried to load the "u" variable with MATLAB's ncread through OpenDAP and that also failed! So maybe it is not a problem related to Python...? I am very confused!
```
Loading files:
http://193.144.213.180:8080/thredds/dodsC/Wind/Wind_ERA5/Global/Wind_ERA5_Global_1998.05.nc
http://193.144.213.180:8080/thredds/dodsC/Wind/Wind_ERA5/Global/Wind_ERA5_Global_1998.06.nc

RuntimeError                              Traceback (most recent call last)
d:\2020_REPSOL\Codigos_input_TESEO\user_script.py
---> 60 ERA5_windIHData2txt_TESEO(lonlat_box=[-4.5, -2.5, 44, 45],
     61                           date_ini=datetime(1998, 5, 28, 0),
     62                           date_end=datetime(1998, 6, 1, 12),

d:\2020_REPSOL\Codigos_input_TESEO\TESEOtools_v0.py in ERA5_windIHData2txt_TESEO(failed resolving arguments)
    827 # From xarray to dataframe
--> 828 df = ds.to_dataframe().reset_index()

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\core\dataset.py in to_dataframe(self)
-> 4505     return self._to_dataframe(self.dims)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\core\dataset.py in _to_dataframe(self, ordered_dims)
-> 4492     self._variables[k].set_dims(ordered_dims).values.reshape(-1)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\core\variable.py in values(self)
--> 446     return _as_array_or_item(self._data)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\core\variable.py in _as_array_or_item(data)
--> 249     data = np.asarray(data)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\numpy\core\_asarray.py in asarray(a, dtype, order)
---> 85     return array(a, dtype, copy=False, order=order)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\array\core.py in __array__(self, dtype, **kwargs)
-> 1336     x = self.compute()

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\base.py in compute(self, **kwargs)
--> 166     (result,) = compute(self, traverse=False, **kwargs)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\base.py in compute(*args, **kwargs)
--> 444     results = schedule(dsk, keys, **kwargs)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\threaded.py in get(dsk, result, cache, num_workers, pool, **kwargs)
---> 76     results = get_async(

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\local.py in get_async(apply_async, num_workers, dsk, result, cache, get_id, rerun_exceptions_locally, pack_exception, raise_exception, callbacks, dumps, loads, **kwargs)
--> 486     raise_exception(exc, tb)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\local.py in reraise(exc, tb)
--> 316     raise exc

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\local.py in execute_task(key, task_info, dumps, loads, get_id, pack_exception)
--> 222     result = _execute_task(task, data)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\core.py in _execute_task(arg, cache, dsk)
--> 121     return func(*(_execute_task(a, cache) for a in args))

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\core.py in <genexpr>(.0)
--> 121     return func(*(_execute_task(a, cache) for a in args))

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\core.py in _execute_task(arg, cache, dsk)
--> 121     return func(*(_execute_task(a, cache) for a in args))

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\core.py in <genexpr>(.0)
--> 121     return func(*(_execute_task(a, cache) for a in args))

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\core.py in _execute_task(arg, cache, dsk)
--> 121     return func(*(_execute_task(a, cache) for a in args))

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\core.py in <genexpr>(.0)
--> 121     return func(*(_execute_task(a, cache) for a in args))

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\core.py in _execute_task(arg, cache, dsk)
--> 121     return func(*(_execute_task(a, cache) for a in args))

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\dask\array\core.py in getter(a, b, asarray, lock)
--> 100     c = np.asarray(c)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\numpy\core\_asarray.py in asarray(a, dtype, order)
---> 85     return array(a, dtype, copy=False, order=order)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\core\indexing.py in __array__(self, dtype)
--> 491     return np.asarray(self.array, dtype=dtype)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\numpy\core\_asarray.py in asarray(a, dtype, order)
---> 85     return array(a, dtype, copy=False, order=order)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\core\indexing.py in __array__(self, dtype)
--> 653     return np.asarray(self.array, dtype=dtype)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\numpy\core\_asarray.py in asarray(a, dtype, order)
---> 85     return array(a, dtype, copy=False, order=order)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\core\indexing.py in __array__(self, dtype)
--> 557     return np.asarray(array[self.key], dtype=None)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\numpy\core\_asarray.py in asarray(a, dtype, order)
---> 85     return array(a, dtype, copy=False, order=order)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\coding\variables.py in __array__(self, dtype)
---> 72     return self.func(self.array)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\coding\variables.py in _scale_offset_decoding(data, scale_factor, add_offset, dtype)
--> 218     data = np.array(data, dtype=dtype, copy=True)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\coding\variables.py in __array__(self, dtype)
---> 72     return self.func(self.array)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\coding\variables.py in _apply_mask(data, encoded_fill_values, decoded_fill_value, dtype)
--> 138     data = np.asarray(data, dtype=dtype)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\numpy\core\_asarray.py in asarray(a, dtype, order)
---> 85     return array(a, dtype, copy=False, order=order)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\core\indexing.py in __array__(self, dtype)
--> 557     return np.asarray(array[self.key], dtype=None)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\backends\netCDF4_.py in __getitem__(self, key)
---> 72     return indexing.explicit_indexing_adapter(key, self.shape, indexing.IndexingSupport.OUTER, self._getitem)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\core\indexing.py in explicit_indexing_adapter(key, shape, indexing_support, raw_indexing_method)
--> 837     result = raw_indexing_method(raw_key.tuple)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\backends\netCDF4_.py in _getitem(self, key)
---> 85     array = getitem(original_array, key)

~\AppData\Local\Continuum\miniconda3\envs\TEST\lib\site-packages\xarray\backends\common.py in robust_getitem(array, key, catch, max_retries, initial_delay)
---> 54     return array[key]

netCDF4\_netCDF4.pyx in netCDF4._netCDF4.Variable.__getitem__()

netCDF4\_netCDF4.pyx in netCDF4._netCDF4.Variable._get()

netCDF4\_netCDF4.pyx in netCDF4._netCDF4._ensure_nc_success()

RuntimeError: NetCDF: Access failure
```
 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
627326097 | https://github.com/pydata/xarray/issues/4043#issuecomment-627326097 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyNzMyNjA5Nw== | ocefpaf 950575 | 2020-05-12T12:58:16Z | 2020-05-12T12:58:16Z | CONTRIBUTOR |
> I installed xarray through the recommended command in the official website in my miniconda env some months to a year ago

That is probably it then. I see you have
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
625675263 | https://github.com/pydata/xarray/issues/4043#issuecomment-625675263 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyNTY3NTI2Mw== | aragong 48764870 | 2020-05-08T07:16:47Z | 2020-05-08T09:10:13Z | NONE | Thank you @ocefpaf. I installed xarray through the recommended command on the official website in my miniconda env some months to a year ago:
I list my versions below:
```
INSTALLED VERSIONS
commit: None
python: 3.6.7 (default, Feb 28 2019, 07:28:18) [MSC v.1900 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 42 Stepping 7, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
libhdf5: 1.10.4
libnetcdf: 4.6.2
xarray: 0.12.1
pandas: 0.24.2
numpy: 1.16.3
scipy: 1.2.1
netCDF4: 1.5.1.2
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: 1.2.0
PseudonetCDF: None
rasterio: None
cfgrib: 0.9.6.2
iris: None
bottleneck: None
dask: 1.1.5
distributed: 1.28.1
matplotlib: 3.0.3
cartopy: 0.16.0
seaborn: None
setuptools: 41.0.1
pip: 19.1.1
conda: 4.8.2
pytest: None
IPython: 7.5.0
sphinx: None

commit: None
python: 3.7.7 (default, May 6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 42 Stepping 7, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
libhdf5: 1.10.4
libnetcdf: 4.7.3
xarray: 0.15.1
pandas: 1.0.3
numpy: 1.18.1
scipy: 1.4.1
netCDF4: 1.5.3
pydap: installed
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.1.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2.15.0
distributed: 2.15.2
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
setuptools: 46.1.3.post20200330
pip: 20.0.2
conda: None
pytest: None
IPython: 7.13.0
sphinx: None
```
Thank you in advance! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
625426383 | https://github.com/pydata/xarray/issues/4043#issuecomment-625426383 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyNTQyNjM4Mw== | ocefpaf 950575 | 2020-05-07T18:35:20Z | 2020-05-07T18:35:20Z | CONTRIBUTOR | How are you installing xarray? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
625330036 | https://github.com/pydata/xarray/issues/4043#issuecomment-625330036 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyNTMzMDAzNg== | aragong 48764870 | 2020-05-07T15:36:15Z | 2020-05-07T15:36:15Z | NONE | Totally agree. From my code, the list of URLs is:
So I think the URLs are properly constructed; indeed, if I select only the longitude variable, which is quite small, I can perform the ds.to_dataframe() method... so I think the URLs are fine! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 | |
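A small sketch of the sanity check described above (loading only a tiny coordinate variable to show the URLs themselves are fine), using the files from this thread:
```python
import xarray as xr

urls = ['http://193.144.213.180:8080/thredds/dodsC/Wind/Wind_ERA5/Global/'
        'Wind_ERA5_Global_1998.05.nc',
        'http://193.144.213.180:8080/thredds/dodsC/Wind/Wind_ERA5/Global/'
        'Wind_ERA5_Global_1998.06.nc']

ds = xr.open_mfdataset(urls)

# Pulling just the longitude coordinate is a tiny request: if this succeeds,
# the URLs and the OPeNDAP endpoint are fine, and only the large 'u' request
# is being refused by the server.
print(ds['lon'].to_dataframe().head())
```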
625325400 | https://github.com/pydata/xarray/issues/4043#issuecomment-625325400 | https://api.github.com/repos/pydata/xarray/issues/4043 | MDEyOklzc3VlQ29tbWVudDYyNTMyNTQwMA== | dcherian 2448579 | 2020-05-07T15:28:21Z | 2020-05-07T15:28:21Z | MEMBER | It's unfortunate that we don't print filenames when access fails. Are you sure all the urls you construct are actually valid? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opendap access failure error 614144170 |
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);