issue_comments
31 rows where author_association = "NONE" and user = 10137 sorted by updated_at descending
user: ghost (10137) · 31 comments
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1180507258 | https://github.com/pydata/xarray/issues/6766#issuecomment-1180507258 | https://api.github.com/repos/pydata/xarray/issues/6766 | IC_kwDOAMm_X85GXRx6 | ghost 10137 | 2022-07-11T14:49:09Z | 2022-07-11T14:49:09Z | NONE | okay thank you, started issue at: https://github.com/Unidata/netcdf-c/issues/2459 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.open_dataset(url) gives NetCDF4 (lru_cache.py) error "oc_open: Could not read url" 1299316581 | |
1180470141 | https://github.com/pydata/xarray/issues/6766#issuecomment-1180470141 | https://api.github.com/repos/pydata/xarray/issues/6766 | IC_kwDOAMm_X85GXIt9 | ghost 10137 | 2022-07-11T14:18:54Z | 2022-07-11T14:18:54Z | NONE | Or maybe I should add to this issue https://github.com/Unidata/netcdf4-python/issues/812 rather than starting a new one? Guidance welcome thanks. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.open_dataset(url) gives NetCDF4 (lru_cache.py) error "oc_open: Could not read url" 1299316581 | |
1180462733 | https://github.com/pydata/xarray/issues/6766#issuecomment-1180462733 | https://api.github.com/repos/pydata/xarray/issues/6766 | IC_kwDOAMm_X85GXG6N | ghost 10137 | 2022-07-11T14:12:54Z | 2022-07-11T14:12:54Z | NONE | Thanks for these suggestions, very helpful. See below for details, but as far as I can tell:
* my conda env ("EQ") has the same curl, libcurl, ca-certificates, and certifi as your system.
* the ncdump command gives the same error (as netcdf4 and xarray).

I should post an issue at netcdf4, correct?
```
(EQ) PS C:\Users\Codiga_D> conda list curl
# packages in environment at C:\Users\Codiga_D\AppData\Local\Continuum\miniconda3\envs\EQ:
# Name              Version    Build           Channel
curl                7.83.1     h789b8ee_0      conda-forge
libcurl             7.83.1     h789b8ee_0      conda-forge

(EQ) PS C:\Users\Codiga_D> conda list certifi
# packages in environment at C:\Users\Codiga_D\AppData\Local\Continuum\miniconda3\envs\EQ:
# Name              Version    Build           Channel
ca-certificates     2022.6.15  h5b45459_0      conda-forge
certifi             2022.6.15  py37h03978a9_0  conda-forge

(EQ) PS C:\Users\Codiga_D> ncdump -h http://psl.noaa.gov/thredds/dodsC/Datasets/NARR/monolevel/uwnd.10m.2000.nc
Error: curl error: SSL connect error
curl error details:
Warning: oc_open: Could not read url
C:\Users\Codiga_D\AppData\Local\Continuum\miniconda3\envs\EQ\Library\bin\ncdump.exe: http://psl.noaa.gov/thredds/dodsC/Datasets/NARR/monolevel/uwnd.10m.2000.nc: NetCDF: I/O failure
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.open_dataset(url) gives NetCDF4 (lru_cache.py) error "oc_open: Could not read url" 1299316581 | |
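The SSL failure above comes from the libnetcdf/libcurl stack that both ncdump and the netcdf4 engine use. Not part of the thread, but a minimal sketch of one way to isolate that stack: open the same URL through xarray's pydap engine, which uses a different HTTP client. This assumes pydap is installed; the URL is the one from the comment above.

```python
import xarray as xr

url = "http://psl.noaa.gov/thredds/dodsC/Datasets/NARR/monolevel/uwnd.10m.2000.nc"

# engine="pydap" bypasses libnetcdf/libcurl entirely; if this succeeds while the
# default netcdf4 engine fails, the problem sits in the curl/SSL layer, not in xarray.
ds = xr.open_dataset(url, engine="pydap")
print(ds)
```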
1179305426 | https://github.com/pydata/xarray/issues/6766#issuecomment-1179305426 | https://api.github.com/repos/pydata/xarray/issues/6766 | IC_kwDOAMm_X85GSsXS | ghost 10137 | 2022-07-08T19:36:39Z | 2022-07-08T19:36:39Z | NONE | Thanks for the quick response.
Result from `xr.show_versions()` is below. Just a thought: I still wonder if this could be related to certificates, which is something that did change on my system recently. I looked for information on where netCDF4 would check for its certificate chain, but haven't found anything useful so far.
```
INSTALLED VERSIONS
commit: None
python: 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 05:37:49) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en
LOCALE: (None, None)
libhdf5: 1.12.1
libnetcdf: 4.8.1

xarray: 0.20.2
pandas: 1.3.5
numpy: 1.21.6
scipy: 1.7.3
netCDF4: 1.6.0
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.6.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.10
cfgrib: 0.9.10.1
iris: None
bottleneck: 1.3.4
dask: None
distributed: None
matplotlib: 3.5.2
cartopy: 0.20.2
seaborn: 0.11.2
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
setuptools: 59.8.0
pip: 22.1.2
conda: None
pytest: None
IPython: 7.33.0
sphinx: 4.3.2
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.open_dataset(url) gives NetCDF4 (lru_cache.py) error "oc_open: Could not read url" 1299316581 | |
879827498 | https://github.com/pydata/xarray/issues/3124#issuecomment-879827498 | https://api.github.com/repos/pydata/xarray/issues/3124 | MDEyOklzc3VlQ29tbWVudDg3OTgyNzQ5OA== | ghost 10137 | 2021-07-14T11:54:10Z | 2021-07-14T11:56:46Z | NONE | @dcherian, @spencerkclark, and @mada0007: could you please tell me how to join the data after selecting Oct-March? Whenever I plot a time series of this selected monthly data, the time series is not continuous. Please let me know. I am attaching a plot for reference. Example.pdf |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
selecting only october to march from monthly data using xarray 467814673 | |
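Not from the thread itself, but a minimal runnable sketch of the October-to-March selection being asked about, using synthetic monthly data; the variable name and monthly frequency are placeholders. The series only looks discontinuous because the dropped months leave gaps on a datetime axis; plotting against integer position gives a visually continuous ("joined") series.

```python
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt

# Synthetic monthly data standing in for the user's dataset.
time = pd.date_range("2000-01-01", periods=48, freq="MS")
da = xr.DataArray(np.random.rand(48), coords={"time": time}, dims="time", name="var")

# Keep only October through March.
oct_mar = da.where(da["time"].dt.month.isin([10, 11, 12, 1, 2, 3]), drop=True)

# Plot against position rather than the datetime axis to avoid seasonal gaps.
plt.plot(np.arange(oct_mar.sizes["time"]), oct_mar.values, marker="o")
plt.show()
```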
863119738 | https://github.com/pydata/xarray/issues/5434#issuecomment-863119738 | https://api.github.com/repos/pydata/xarray/issues/5434 | MDEyOklzc3VlQ29tbWVudDg2MzExOTczOA== | ghost 10137 | 2021-06-17T10:20:46Z | 2021-06-17T10:26:12Z | NONE | Sorry for the late response. I was trying to read a big GeoTIFF file as follows:
```python
import xarray as xr
xds = xr.open_rasterio(geotif_file)
```
My task was array indexing and saving the output to disk:
```python
columns = [8, 9, 7, 100, 1050, ......, 9000]
rows = [18, 19, 17, 1100, 1105, ......, 9100]
data = xds.isel(x=xr.DataArray(columns), y=xr.DataArray(rows))
np.save('output.npy', data)
```
Unfortunately, the performance in terms of time seems quite unsatisfactory. When I saw docs on I look forward to see it as |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.open_rasterio 910844095 | |
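A small sketch (not from the thread) of the pointwise-indexing pattern behind the snippet above: giving both indexers a shared dimension name selects one pixel per (row, column) pair instead of the full outer product. The file path is a placeholder, `open_rasterio` was later deprecated in favour of rioxarray, and whether this is fast for a huge GeoTIFF still depends on the file's internal tiling.

```python
import numpy as np
import xarray as xr

xds = xr.open_rasterio("input.tif")  # placeholder path

cols = xr.DataArray(np.array([8, 9, 7, 100, 1050]), dims="points")
rows = xr.DataArray(np.array([18, 19, 17, 1100, 1105]), dims="points")

# Shared "points" dimension -> vectorized (pointwise) selection: the result has one
# value per (band, point) rather than a rows x cols block.
data = xds.isel(x=cols, y=rows)
np.save("output.npy", data.values)
```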
573192307 | https://github.com/pydata/xarray/issues/3684#issuecomment-573192307 | https://api.github.com/repos/pydata/xarray/issues/3684 | MDEyOklzc3VlQ29tbWVudDU3MzE5MjMwNw== | ghost 10137 | 2020-01-10T20:25:53Z | 2020-01-10T21:03:28Z | NONE | Each individual dataset opens successfully.
```
Coordinates:
  * time     (time) int32 4000 4001 4002 4003 4004 ... 4995 4996 4997 4998 4999
  * sectors  (sectors) object '40107_0_260000' '40107_1_320000' '40107_2_290000'
  * beams    (beams) int32 0 1 2 3 4 5 6 7 8 ... 242 243 244 245 246 247 248 249
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset - different behavior with dask.distributed.LocalCluster 548263148 | |
501302890 | https://github.com/pydata/xarray/issues/3007#issuecomment-501302890 | https://api.github.com/repos/pydata/xarray/issues/3007 | MDEyOklzc3VlQ29tbWVudDUwMTMwMjg5MA== | ghost 10137 | 2019-06-12T14:36:44Z | 2019-06-12T14:36:44Z | NONE | I know what "NaN" means. I was hoping that by transforming the dataset into a dataframe and then converting back, the dataset variables would recover their original shape. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
NaN values for variables when converting from a pandas dataframe to xarray.DataSet 454073421 | |
445939304 | https://github.com/pydata/xarray/issues/2535#issuecomment-445939304 | https://api.github.com/repos/pydata/xarray/issues/2535 | MDEyOklzc3VlQ29tbWVudDQ0NTkzOTMwNA== | ghost 10137 | 2018-12-10T19:23:14Z | 2018-12-10T19:23:14Z | NONE | It seems that this is not a problem with xarray but only with rasterio and netCDF4. Also this fails:
```python
import rasterio
import netCDF4

with netCDF4.Dataset('test.nc', mode='w') as ds:
    ds.createDimension('x')
    ds.createVariable('foo', float, dimensions=('x'))
    print(ds)
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
HDF error when trying to write Dataset read with rasterio to NetCDF 376389539 | |
443840119 | https://github.com/pydata/xarray/issues/2535#issuecomment-443840119 | https://api.github.com/repos/pydata/xarray/issues/2535 | MDEyOklzc3VlQ29tbWVudDQ0Mzg0MDExOQ== | ghost 10137 | 2018-12-03T19:33:17Z | 2018-12-03T19:33:17Z | NONE | I have a similar problem when importing:
```python
import xarray as xa
import numpy as np
#import netCDF4
import rasterio

ds = xa.Dataset()
ds['z'] = (('y', 'x'), np.zeros((100, 100), np.float32))
print(ds)
ds.to_netcdf('test.nc')
ds.close()

with xa.open_dataset('test.nc') as ds:
    print(ds)
```
If I import
I installed everything with pip. From:
```
affine==2.2.1
attrs==18.2.0
cftime==1.0.3
Click==7.0
click-plugins==1.0.4
cligj==0.5.0
Cython==0.29.1
netCDF4==1.4.2
numpy==1.15.4
pandas==0.23.4
pyparsing==2.3.0
python-dateutil==2.7.5
pytz==2018.7
rasterio==1.0.11
six==1.11.0
snuggs==1.4.2
xarray==0.11.0
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
HDF error when trying to write Dataset read with rasterio to NetCDF 376389539 | |
389622523 | https://github.com/pydata/xarray/issues/2139#issuecomment-389622523 | https://api.github.com/repos/pydata/xarray/issues/2139 | MDEyOklzc3VlQ29tbWVudDM4OTYyMjUyMw== | ghost 10137 | 2018-05-16T18:37:24Z | 2018-05-16T18:37:24Z | NONE | Does that sound like it will play well with GeoViews if I want widgets for the categorical vars? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
From pandas to xarray without blowing up memory 323703742 | |
389622155 | https://github.com/pydata/xarray/issues/2139#issuecomment-389622155 | https://api.github.com/repos/pydata/xarray/issues/2139 | MDEyOklzc3VlQ29tbWVudDM4OTYyMjE1NQ== | ghost 10137 | 2018-05-16T18:36:17Z | 2018-05-16T18:36:17Z | NONE | Ok. Looks like the way forward is a netCDF file for each level of my categorical variables. Will give it a shot. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
From pandas to xarray without blowing up memory 323703742 | |
389618279 | https://github.com/pydata/xarray/issues/2139#issuecomment-389618279 | https://api.github.com/repos/pydata/xarray/issues/2139 | MDEyOklzc3VlQ29tbWVudDM4OTYxODI3OQ== | ghost 10137 | 2018-05-16T18:24:02Z | 2018-05-16T18:24:02Z | NONE | @shoyer Thank you. Does metacsv look likely to work to you? It has attracted almost no attention so I wonder if it will exhaust memory. I'm kind of surprised this path (csv -> xarray) isn't better fleshed out as I would have expected it to be very common, perhaps the most common esp. for "found data." |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
From pandas to xarray without blowing up memory 323703742 | |
389596244 | https://github.com/pydata/xarray/issues/2139#issuecomment-389596244 | https://api.github.com/repos/pydata/xarray/issues/2139 | MDEyOklzc3VlQ29tbWVudDM4OTU5NjI0NA== | ghost 10137 | 2018-05-16T17:13:11Z | 2018-05-16T17:13:11Z | NONE | This looks potentially helpful http://metacsv.readthedocs.io/en/latest/ |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
From pandas to xarray without blowing up memory 323703742 | |
389592602 | https://github.com/pydata/xarray/issues/2139#issuecomment-389592602 | https://api.github.com/repos/pydata/xarray/issues/2139 | MDEyOklzc3VlQ29tbWVudDM4OTU5MjYwMg== | ghost 10137 | 2018-05-16T17:01:37Z | 2018-05-16T17:01:37Z | NONE | PS: I started with Dask but haven't found a way to go from Dask to xarray. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
From pandas to xarray without blowing up memory 323703742 | |
389592243 | https://github.com/pydata/xarray/issues/2139#issuecomment-389592243 | https://api.github.com/repos/pydata/xarray/issues/2139 | MDEyOklzc3VlQ29tbWVudDM4OTU5MjI0Mw== | ghost 10137 | 2018-05-16T17:00:24Z | 2018-05-16T17:00:24Z | NONE | Hi @jhamman. The original data is literally just a flat csv file, i.e. lat,lon,epoch,cat1,cat2,var1,var2,...,var50, with 1 billion rows. I'm looking to xarray for GeoViews, which I think would benefit from having the data properly grouped/indexed by its categories. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
From pandas to xarray without blowing up memory 323703742 | |
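Not part of the thread, but a minimal sketch of the pandas route for a file shaped like the one described above (lat,lon,epoch,cat1,cat2,var...): set a MultiIndex over the coordinate-like columns and call `to_xarray()`. The file name and `nrows` limit are placeholders; a billion rows would need chunked or dask-based processing, and `to_xarray()` densifies the index, so sparse lat/lon/epoch/category combinations can blow up memory, which is exactly the concern in this issue.

```python
import pandas as pd

# Placeholder file name; read only a slice while prototyping.
df = pd.read_csv("data.csv", nrows=1_000_000)

# MultiIndex over the coordinate-like columns, then convert: the result is an
# xarray.Dataset with dims (lat, lon, epoch, cat1, cat2) and the var* columns as
# data variables, padded with NaN for missing index combinations.
ds = df.set_index(["lat", "lon", "epoch", "cat1", "cat2"]).to_xarray()
print(ds)
```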
364970290 | https://github.com/pydata/xarray/pull/1683#issuecomment-364970290 | https://api.github.com/repos/pydata/xarray/issues/1683 | MDEyOklzc3VlQ29tbWVudDM2NDk3MDI5MA== | ghost 10137 | 2018-02-12T16:06:44Z | 2018-02-12T16:06:44Z | NONE | Closing. Superseded by #1682. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add h5netcdf to the engine import hierarchy 270701183 | |
360970213 | https://github.com/pydata/xarray/issues/1860#issuecomment-360970213 | https://api.github.com/repos/pydata/xarray/issues/1860 | MDEyOklzc3VlQ29tbWVudDM2MDk3MDIxMw== | ghost 10137 | 2018-01-27T08:41:10Z | 2018-01-27T08:41:10Z | NONE | This was fixed through https://github.com/pydap/pydap/pull/159! Thank you. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
IndexError when accesing a data variable through a PydapDataStore 291926319 | |
360969685 | https://github.com/pydata/xarray/issues/1860#issuecomment-360969685 | https://api.github.com/repos/pydata/xarray/issues/1860 | MDEyOklzc3VlQ29tbWVudDM2MDk2OTY4NQ== | ghost 10137 | 2018-01-27T08:29:52Z | 2018-01-27T08:29:52Z | NONE | The method |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
IndexError when accesing a data variable through a PydapDataStore 291926319 | |
360807708 | https://github.com/pydata/xarray/issues/1860#issuecomment-360807708 | https://api.github.com/repos/pydata/xarray/issues/1860 | MDEyOklzc3VlQ29tbWVudDM2MDgwNzcwOA== | ghost 10137 | 2018-01-26T15:01:55Z | 2018-01-26T15:01:55Z | NONE | For some reason, the name of the variable at some point becomes 'tlml.tlml'. Method |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
IndexError when accesing a data variable through a PydapDataStore 291926319 | |
360782142 | https://github.com/pydata/xarray/issues/1857#issuecomment-360782142 | https://api.github.com/repos/pydata/xarray/issues/1857 | MDEyOklzc3VlQ29tbWVudDM2MDc4MjE0Mg== | ghost 10137 | 2018-01-26T13:18:10Z | 2018-01-26T13:18:10Z | NONE | Thanks for the suggestion! Installing both latest master of xarray (0092911) and latest master of pydap (4ae73e3) fixed this issue, and now I can open the dataset. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
AttributeError: '<class 'pydap.model.GridType'>' object has no attribute 'shape' 291524555 | |
360468449 | https://github.com/pydata/xarray/issues/1857#issuecomment-360468449 | https://api.github.com/repos/pydata/xarray/issues/1857 | MDEyOklzc3VlQ29tbWVudDM2MDQ2ODQ0OQ== | ghost 10137 | 2018-01-25T13:37:37Z | 2018-01-25T13:37:37Z | NONE | After pulling (Git says ‘Already up-to-date.’), my xarray version ( |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
AttributeError: '<class 'pydap.model.GridType'>' object has no attribute 'shape' 291524555 | |
360461190 | https://github.com/pydata/xarray/issues/1857#issuecomment-360461190 | https://api.github.com/repos/pydata/xarray/issues/1857 | MDEyOklzc3VlQ29tbWVudDM2MDQ2MTE5MA== | ghost 10137 | 2018-01-25T13:06:15Z | 2018-01-25T13:06:15Z | NONE | Same thing:
```
Traceback (most recent call last):
  File "C:\Anaconda3\envs\xa_test\lib\site-packages\pydap\model.py", line 295, in __getattr__
    return self[attr]
  File "C:\Anaconda3\envs\xa_test\lib\site-packages\pydap\model.py", line 556, in __getitem__
    return StructureType.__getitem__(self, key)
  File "C:\Anaconda3\envs\xa_test\lib\site-packages\pydap\model.py", line 326, in __getitem__
    return self._dict[key]
KeyError: 'shape'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Anaconda3\envs\xa_test\lib\site-packages\pydap\model.py", line 180, in __getattr__
    return self.attributes[attr]
KeyError: 'shape'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\src\xarray\xarray\backends\api.py", line 305, in open_dataset
    return maybe_decode_store(store, lock)
  File "c:\src\xarray\xarray\backends\api.py", line 225, in maybe_decode_store
    drop_variables=drop_variables)
  File "c:\src\xarray\xarray\conventions.py", line 598, in decode_cf
    vars, attrs = obj.load()
  File "c:\src\xarray\xarray\backends\common.py", line 133, in load
    for k, v in self.get_variables().items())
  File "c:\src\xarray\xarray\backends\pydap_.py", line 85, in get_variables
    for k in self.ds.keys())
  File "c:\src\xarray\xarray\core\utils.py", line 309, in FrozenOrderedDict
    return Frozen(OrderedDict(*args, **kwargs))
  File "c:\src\xarray\xarray\backends\pydap_.py", line 85, in <genexpr>
    for k in self.ds.keys())
  File "c:\src\xarray\xarray\backends\pydap_.py", line 79, in open_store_variable
    data = indexing.LazilyIndexedArray(PydapArrayWrapper(var))
  File "c:\src\xarray\xarray\core\indexing.py", line 482, in __init__
    key = BasicIndexer((slice(None),) * array.ndim)
  File "c:\src\xarray\xarray\core\utils.py", line 428, in ndim
    return len(self.shape)
  File "c:\src\xarray\xarray\backends\pydap_.py", line 20, in shape
    return self.array.shape
  File "C:\Anaconda3\envs\xa_test\lib\site-packages\pydap\model.py", line 297, in __getattr__
    return DapType.__getattr__(self, attr)
  File "C:\Anaconda3\envs\xa_test\lib\site-packages\pydap\model.py", line 184, in __getattr__
    % (self.__class__, attr))
AttributeError: '<class 'pydap.model.GridType'>' object has no attribute 'shape'
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
AttributeError: '<class 'pydap.model.GridType'>' object has no attribute 'shape' 291524555 | |
355678395 | https://github.com/pydata/xarray/pull/1682#issuecomment-355678395 | https://api.github.com/repos/pydata/xarray/issues/1682 | MDEyOklzc3VlQ29tbWVudDM1NTY3ODM5NQ== | ghost 10137 | 2018-01-05T22:07:03Z | 2018-01-05T22:07:03Z | NONE | Now that the tests are passing again, is there anything else left to change? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add option “engine” 270677100 | |
351810655 | https://github.com/pydata/xarray/pull/1682#issuecomment-351810655 | https://api.github.com/repos/pydata/xarray/issues/1682 | MDEyOklzc3VlQ29tbWVudDM1MTgxMDY1NQ== | ghost 10137 | 2017-12-14T19:25:03Z | 2017-12-14T19:25:03Z | NONE | I've refactored setting the I/O engine option as per our discussion. Hopefully, it captures now all the requested functionality. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add option “engine” 270677100 | |
347917241 | https://github.com/pydata/xarray/pull/1682#issuecomment-347917241 | https://api.github.com/repos/pydata/xarray/issues/1682 | MDEyOklzc3VlQ29tbWVudDM0NzkxNzI0MQ== | ghost 10137 | 2017-11-29T16:32:56Z | 2017-11-29T16:32:56Z | NONE | Let's see if we can get this PR over the line... 😄 A list of engines would need some way of declaring their I/O capabilities: only file-based, only HTTP-based, or both. Something like:
```python
io_engines = [
    {'engine': 'netcdf4', 'capabilities': ['file', 'http']},
]
```
On xarray import, or any time this option changes, the list of engines would be checked to remove unavailable engines. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add option “engine” 270677100 | |
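Not actual xarray code, just a sketch of the check described in the comment above: given such a list, drop entries whose backing library cannot be imported. The `module` field, the extra engine entries, and the helper name are hypothetical additions for illustration.

```python
import importlib

io_engines = [
    {'engine': 'netcdf4', 'module': 'netCDF4', 'capabilities': ['file', 'http']},
    {'engine': 'h5netcdf', 'module': 'h5netcdf', 'capabilities': ['file']},
    {'engine': 'pydap', 'module': 'pydap', 'capabilities': ['http']},
]

def available_engines(engines):
    """Keep only engines whose underlying module imports successfully."""
    usable = []
    for entry in engines:
        try:
            importlib.import_module(entry['module'])
        except ImportError:
            continue
        usable.append(entry)
    return usable

print([e['engine'] for e in available_engines(io_engines)])
```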
341773389 | https://github.com/pydata/xarray/pull/1682#issuecomment-341773389 | https://api.github.com/repos/pydata/xarray/issues/1682 | MDEyOklzc3VlQ29tbWVudDM0MTc3MzM4OQ== | ghost 10137 | 2017-11-03T17:30:18Z | 2017-11-03T17:30:18Z | NONE | Yes, there could be more I/O engine options. How about On the other hand, setting this global option should indicate a willingness to accept the consequences. If automatic selection of the optional I/O engine is preferred, this global option should not be set. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add option “engine” 270677100 | |
341610428 | https://github.com/pydata/xarray/pull/1682#issuecomment-341610428 | https://api.github.com/repos/pydata/xarray/issues/1682 | MDEyOklzc3VlQ29tbWVudDM0MTYxMDQyOA== | ghost 10137 | 2017-11-03T02:35:14Z | 2017-11-03T02:35:14Z | NONE | How about I have reverted to the original |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add option “engine” 270677100 | |
317472105 | https://github.com/pydata/xarray/issues/1484#issuecomment-317472105 | https://api.github.com/repos/pydata/xarray/issues/1484 | MDEyOklzc3VlQ29tbWVudDMxNzQ3MjEwNQ== | ghost 10137 | 2017-07-24T16:08:30Z | 2017-07-24T16:08:30Z | NONE | Just saw xr.DataArray.dot(). PROBLEM SOLVED. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Matrix cross product in xarray 244702576 | |
317470340 | https://github.com/pydata/xarray/issues/1484#issuecomment-317470340 | https://api.github.com/repos/pydata/xarray/issues/1484 | MDEyOklzc3VlQ29tbWVudDMxNzQ3MDM0MA== | ghost 10137 | 2017-07-24T16:02:40Z | 2017-07-24T16:02:40Z | NONE | How do I take the dot product (np.dot or pandas.DataFrame.dot) of two DataArrays? X has dimensions [dim_0, dim_1, dim_2], Y has dimensions [dim_0, dim_3]; the result should have dimensions [dim_1, dim_2, dim_3]. result = np.dot(X, Y) OR result = pd.DataFrame.dot(X, Y). In both cases, the error "shapes are not aligned" occurred. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Matrix cross product in xarray 244702576 | |
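A runnable sketch (not from the thread) of the `xr.DataArray.dot()` resolution mentioned two comments up: with no `dims` argument it contracts over the dimensions the two arrays share, which gives exactly the [dim_1, dim_2, dim_3] result asked for. The array sizes are arbitrary.

```python
import numpy as np
import xarray as xr

X = xr.DataArray(np.random.rand(4, 2, 3), dims=("dim_0", "dim_1", "dim_2"))
Y = xr.DataArray(np.random.rand(4, 5), dims=("dim_0", "dim_3"))

# Contracts over the shared dim_0 by default.
result = X.dot(Y)
print(result.dims)  # ('dim_1', 'dim_2', 'dim_3')
```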
116411269 | https://github.com/pydata/xarray/issues/448#issuecomment-116411269 | https://api.github.com/repos/pydata/xarray/issues/448 | MDEyOklzc3VlQ29tbWVudDExNjQxMTI2OQ== | ghost 10137 | 2015-06-29T03:22:52Z | 2015-06-29T03:22:52Z | NONE | I agree that it's the point with np.asarray, but given the implementation you'd think np.asanyarray would work. My initial takeaway (until examining the source) was that this was an ndarray with additional attributes and properties. Perhaps, I'm leaning too far towards numpy and too far away from pandas. As background: my usage involves RF pattern data which typically involves a lot of independent variables to lug around as well as the measured data. I'll look into your other suggestions. Thank you for your reply. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
asarray Compatibility 91676831 |
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
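Not part of the page, but a small sketch of how the view above maps onto this schema: the same filter and sort, run with Python's sqlite3 against a local copy of the database (the file name is a placeholder).

```python
import sqlite3

conn = sqlite3.connect("github.db")  # placeholder path to the SQLite file
rows = conn.execute(
    """
    SELECT id, issue_url, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'NONE' AND user = 10137
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows))  # 31 rows for this view
```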