issues
1 row where repo = 13221727, state = "open" and user = 132147, sorted by updated_at descending
id: 1596115847
node_id: I_kwDOAMm_X85fIsuH
number: 7549
title: HDF5-DIAG warnings calling `open_mfdataset` with more than `file_cache_maxsize` datasets (hdf5 1.12.2)
user: mx-moth 132147
state: open
locked: 0
comments: 10
created_at: 2023-02-23T02:28:23Z
updated_at: 2023-03-26T00:41:11Z
author_association: CONTRIBUTOR

body:

### What happened?

Using `open_mfdataset` to open more datasets than `file_cache_maxsize` emits a long stream of HDF5-DIAG warnings when running against hdf5 1.12.2.

### What did you expect to happen?

No warnings from HDF5-DIAG. Either raise an error because of the number of files being opened at once, or behave as in the working environment below (libhdf5 1.12.0).

### Minimal Complete Verifiable Example

```Python
import argparse
import pathlib
import tempfile
from typing import List

import netCDF4
import xarray

HERE = pathlib.Path(__file__).parent


def add_arguments(parser: argparse.ArgumentParser):
    parser.add_argument('count', type=int, default=200, nargs='?')
    parser.add_argument('--file-cache-maxsize', type=int, required=False)


def main():
    parser = argparse.ArgumentParser()
    add_arguments(parser)
    opts = parser.parse_args()
    # The remainder of main() was truncated in this export. A plausible
    # reconstruction from the CLI flags and the description below:
    # optionally lower file_cache_maxsize, make `count` datasets in a
    # temporary directory, then open them all at once.
    if opts.file_cache_maxsize is not None:
        xarray.set_options(file_cache_maxsize=opts.file_cache_maxsize)
    with tempfile.TemporaryDirectory(dir=HERE) as tmp:
        paths = make_many_datasets(pathlib.Path(tmp), count=opts.count)
        xarray.open_mfdataset(paths)


def make_many_datasets(
    work_dir: pathlib.Path,
    count: int = 200,
) -> List[pathlib.Path]:
    dataset_paths = []
    for i in range(count):
        variable = f'var_{i}'
        path = work_dir / f'{variable}.nc'
        dataset_paths.append(path)
        make_dataset(path, variable)
    return dataset_paths


def make_dataset(
    path: pathlib.Path,
    variable: str,
) -> None:
    ds = netCDF4.Dataset(path, "w", format="NETCDF4")
    ds.createDimension("x", 1)
    var = ds.createVariable(variable, "i8", ("x",))
    var[:] = 1
    ds.close()


if __name__ == '__main__':
    main()
```

### MVCE confirmation
### Relevant log output
### Anything else we need to know?

The example is a script to run on the command line. Assuming the file is named `test.py`:

```shell
# This will show the error. Defaults to making 200 files
$ python3 ./test.py

# This will not show the error - the number of files is less than file_cache_maxsize
$ python3 ./test.py 127

# This will adjust file_cache_maxsize to show the error again, despite the lower number of files
$ python3 ./test.py 11 --file-cache-maxsize=10
```

The log output is from the failing environment below. All output files are restricted to temporary directories created in the current working directory.

### Environment

Failing environment:
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.10 (default, Nov 14 2022, 12:59:47)
[GCC 9.4.0]
python-bits: 64
OS: Linux
OS-release: 5.15.0-58-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_AU.UTF-8
LOCALE: ('en_AU', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.9.0
xarray: 2023.1.0
pandas: 1.5.3
numpy: 1.24.2
scipy: None
netCDF4: 1.6.2
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.6.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2023.2.0
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2023.1.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 44.0.0
pip: 23.0.1
conda: None
pytest: None
mypy: None
IPython: None
sphinx: None
Working environment:
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.10 (default, Nov 14 2022, 12:59:47)
[GCC 9.4.0]
python-bits: 64
OS: Linux
OS-release: 5.15.0-58-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_AU.UTF-8
LOCALE: ('en_AU', 'UTF-8')
libhdf5: 1.12.0
libnetcdf: 4.7.4
xarray: 2023.1.0
pandas: 1.5.3
numpy: 1.24.2
scipy: None
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.6.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2023.2.0
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2023.1.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 44.0.0
pip: 23.0.1
conda: None
pytest: None
mypy: None
IPython: None
sphinx: None
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/7549/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
repo: xarray 13221727
type: issue
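For context on the option this report revolves around (not part of the original issue): xarray keeps a bounded cache of open file handles, sized by `file_cache_maxsize`, and the warnings appear once `open_mfdataset` touches more files than the cache holds. A minimal sketch of raising the limit before opening files, where the glob path is a hypothetical stand-in for the files the MVCE creates:

```Python
import xarray

# file_cache_maxsize caps how many file handles xarray keeps open at once;
# files past the cap are closed and transparently reopened on access.
# It must be set before any files are opened; the default is 128.
xarray.set_options(file_cache_maxsize=512)

# Hypothetical glob standing in for the MVCE's generated files.
ds = xarray.open_mfdataset("./var_*.nc")
```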
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
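The row above is just a filtered, ordered query against this table. A minimal sketch of reproducing it with Python's built-in `sqlite3` module, assuming the database behind this Datasette page has been saved locally as `github.db` (a hypothetical filename):

```Python
import sqlite3

# Hypothetical local copy of the SQLite database behind this page.
conn = sqlite3.connect("github.db")

# Same filter and ordering as the page header:
# repo = 13221727, state = "open", user = 132147, newest update first.
rows = conn.execute(
    """
    SELECT id, number, title, updated_at
    FROM issues
    WHERE repo = ? AND state = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (13221727, "open", 132147),
).fetchall()

for row in rows:
    print(row)
```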