issue_comments
3 rows where issue = 1596115847 and user = 5821660 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1449702243 | https://github.com/pydata/xarray/issues/7549#issuecomment-1449702243 | https://api.github.com/repos/pydata/xarray/issues/7549 | IC_kwDOAMm_X85WaLNj | kmuehlbauer 5821660 | 2023-03-01T09:37:43Z | 2023-03-01T09:37:43Z | MEMBER | This is as far as I can get for the moment. @mx-moth I'd suggest going upstream (netCDF4/netcdf-c) with details about this issue. At least we can rule out an issue only related to …. Maybe @DennisHeimbigner can shed more light here?
|
{
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
HDF5-DIAG warnings calling `open_mfdataset` with more than `file_cache_maxsize` datasets (hdf5 1.12.2) 1596115847 | |
| 1449662753 | https://github.com/pydata/xarray/issues/7549#issuecomment-1449662753 | https://api.github.com/repos/pydata/xarray/issues/7549 | IC_kwDOAMm_X85WaBkh | kmuehlbauer 5821660 | 2023-03-01T09:19:26Z | 2023-03-01T09:31:35Z | MEMBER | I just tested this with netcdf-c 4.9.1, but these errors still show up, also with a conda-forge-only install. To make this even weirder, I've checked creation/reading with only hdf5/h5py/h5netcdf in the environment. There everything seems to work well.

```python
import argparse
import pathlib
import tempfile
from typing import List

import h5netcdf.legacyapi as nc
import xarray

HERE = pathlib.Path(__file__).parent


def add_arguments(parser: argparse.ArgumentParser):
    parser.add_argument('count', type=int, default=200, nargs='?')
    parser.add_argument('--file-cache-maxsize', type=int, required=False)


def main():
    parser = argparse.ArgumentParser()
    add_arguments(parser)
    opts = parser.parse_args()


def make_many_datasets(
    work_dir: pathlib.Path, count: int = 200
) -> List[pathlib.Path]:
    dataset_paths = []
    for i in range(count):
        variable = f'var_{i}'
        path = work_dir / f'{variable}.nc'
        dataset_paths.append(path)
        make_dataset(path, variable)
    return dataset_paths


def make_dataset(
    path: pathlib.Path,
    variable: str,
) -> None:
    ds = nc.Dataset(path, "w")
    ds.createDimension("x", 1)
    var = ds.createVariable(variable, "i8", ("x",))
    var[:] = 1
    ds.close()


if __name__ == '__main__':
    main()
```

```
# This will show no error. Defaults to making 200 files
$ python3 ./test.py

# This will also not show the error - the number of files is less than file_cache_maxsize:
$ python3 ./test.py 127

# This will adjust file_cache_maxsize to show the error again, despite the lower number of files; here we see another error issued by h5py
$ python3 ./test.py 11 --file-cache-maxsize=10
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.16 | packaged by conda-forge | (main, Feb 1 2023, 21:39:03)
[GCC 11.3.0]
python-bits: 64
OS: Linux
OS-release: 5.14.21-150400.24.46-default
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: de_DE.UTF-8
LOCALE: ('de_DE', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: None
xarray: 2023.2.0
pandas: 1.5.3
numpy: 1.24.2
scipy: None
netCDF4: None
pydap: None
h5netcdf: 1.1.0
h5py: 3.7.0
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2023.2.1
distributed: 2023.2.1
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2023.1.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 67.4.0
pip: 23.0.1
conda: None
pytest: None
mypy: None
IPython: None
sphinx: None
Update: added |
{
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
HDF5-DIAG warnings calling `open_mfdataset` with more than `file_cache_maxsize` datasets (hdf5 1.12.2) 1596115847 | |
| 1449454232 | https://github.com/pydata/xarray/issues/7549#issuecomment-1449454232 | https://api.github.com/repos/pydata/xarray/issues/7549 | IC_kwDOAMm_X85WZOqY | kmuehlbauer 5821660 | 2023-03-01T07:01:40Z | 2023-03-01T07:01:40Z | MEMBER | @dcherian Thanks for the ping. I can reproduce this in a fresh conda-forge env with pip-installed netcdf4, xarray and dask. @mx-moth A search brought up this likely related issue over at netcdf-c: https://github.com/Unidata/netcdf-c/issues/2458. The corresponding PR with a fix, https://github.com/Unidata/netcdf-c/pull/2461, is milestoned for netcdf-c 4.9.1. |
{
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
HDF5-DIAG warnings calling `open_mfdataset` with more than `file_cache_maxsize` datasets (hdf5 1.12.2) 1596115847 |
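The reproducer quoted in the 2023-03-01T09:19 comment is cut off in this export before the part of `main()` that actually opens the files. As a minimal sketch only (not the original author's driver), one plausible way to exercise `file_cache_maxsize` with the helpers from that script is via `xarray.set_options` and `xarray.open_mfdataset`:

```python
# Sketch: the original main() is truncated in this export, so the exact driver is
# unknown. This assumes the add_arguments()/make_many_datasets() helpers from the
# quoted script and a temporary working directory.
import argparse
import pathlib
import tempfile

import xarray


def main():
    parser = argparse.ArgumentParser()
    add_arguments(parser)  # helper from the quoted script
    opts = parser.parse_args()

    if opts.file_cache_maxsize is not None:
        # Shrink xarray's global file-handle cache below the number of files.
        xarray.set_options(file_cache_maxsize=opts.file_cache_maxsize)

    with tempfile.TemporaryDirectory() as tmp:
        # helper from the quoted script
        paths = make_many_datasets(pathlib.Path(tmp), count=opts.count)
        # Opening more datasets than file_cache_maxsize is what provokes the
        # HDF5-DIAG warnings discussed in the issue.
        ds = xarray.open_mfdataset(paths, engine="h5netcdf")
        print(ds)
```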
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
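Given this schema, the row selection described at the top of the page (3 rows where issue = 1596115847 and user = 5821660, sorted by updated_at descending) corresponds to a simple query. A sketch using Python's sqlite3, assuming the export lives in a database file named `xarray-issue-comments.db` (hypothetical filename):

```python
# Sketch: reproduce this page's row selection against the issue_comments table.
import sqlite3

conn = sqlite3.connect("xarray-issue-comments.db")  # assumed filename
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 1596115847 AND user = 5821660
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    # Print a short preview of each comment body.
    print(row["id"], row["updated_at"], row["body"][:60])
```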