issues
3 rows where state = "closed", type = "issue" and user = 25071375 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2215890029 | I_kwDOAMm_X86EE8xt | 8894 | Rolling reduction with a custom function generates an excessive use of memory that kills the workers | josephnowak 25071375 | closed | 0 | 8 | 2024-03-29T19:15:28Z | 2024-04-01T20:57:59Z | 2024-03-30T01:49:17Z | CONTRIBUTOR |

What happened?

Hi, I have been trying to use a custom function with the rolling reduction method. The original function filters out the NaN values (any numpy function I have used that handles NaNs produces the same problem) and then applies some simple aggregate functions, but it is killing all my workers even though the data is very small (I have 7 workers, each with 3 GB of RAM).

What did you expect to happen?

I would expect less memory use, given the size of the rolling window, the simplicity of the function, and the amount of data used in the example.

Minimal Complete Verifiable Example

```Python
import numpy as np
import dask
import dask.array as da
import xarray as xr


def f(x, axis):
    # If I replace np.nansum with np.sum everything works perfectly
    # and the amount of memory used is very small
    return np.nansum(x, axis=axis)


arr = xr.DataArray(
    dask.array.zeros(
        shape=(300, 30000),
        dtype=float,
        chunks=(30, 6000)
    ),
    dims=["a", "b"],
    coords={"a": list(range(300)), "b": list(range(30000))}
)

arr.rolling(a=252).reduce(f).chunk({"a": 252}).to_zarr(
    "/data/test/test_write", mode="w"
)
```

MVCE confirmation
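For comparison with the MVCE above, a sketch of two lower-memory formulations, under the assumption (not stated in the report beyond the example) that the end goal is a NaN-skipping rolling sum; the `"window"` dimension name is an arbitrary choice for illustration:

```Python
import dask.array as da
import xarray as xr

arr = xr.DataArray(
    da.zeros(shape=(300, 30000), dtype=float, chunks=(30, 6000)),
    dims=["a", "b"],
)

# Built-in rolling aggregation; dispatches to optimized moving-window
# kernels (bottleneck/numbagg) when those libraries are installed,
# instead of handing materialized window slices to a custom callable.
rolled = arr.rolling(a=252).sum()

# For custom logic: build a lazy windowed view and reduce over the new
# "window" dimension with xarray's own NaN-aware reduction.
windowed = arr.rolling(a=252).construct("window")
result = windowed.sum("window", skipna=True)
```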
Relevant log output
Anything else we need to know?

No response

Environment
INSTALLED VERSIONS
------------------
commit: None
python: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:43:09) [GCC 12.3.0]
python-bits: 64
OS: Linux
OS-release: 4.14.275-207.503.amzn2.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: None
xarray: 2024.1.0
pandas: 2.2.1
numpy: 1.26.3
scipy: 1.11.4
netCDF4: None
pydap: None
h5netcdf: None
h5py: 3.10.0
Nio: None
zarr: 2.16.1
cftime: None
nc_time_axis: None
iris: None
bottleneck: 1.3.7
dask: 2024.1.0
distributed: 2024.1.0
matplotlib: 3.8.2
cartopy: None
seaborn: 0.13.1
numbagg: 0.7.0
fsspec: 2023.12.2
cupy: None
pint: None
sparse: 0.15.1
flox: 0.8.9
numpy_groupies: 0.10.2
setuptools: 69.0.3
pip: 23.3.2
conda: 23.11.0
pytest: 7.4.4
mypy: None
IPython: 8.20.0
sphinx: None
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/8894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
state_reason: completed | repo: xarray 13221727 | type: issue
865206283 | MDU6SXNzdWU4NjUyMDYyODM= | 5210 | Probable error using zarr process synchronizer | josephnowak 25071375 | closed | 0 | 1 | 2021-04-22T17:05:10Z | 2023-10-14T20:36:18Z | 2023-10-14T20:36:18Z | CONTRIBUTOR |

Hi, I was trying to use xarray's open_zarr with the zarr ProcessSynchronizer class and it produced a set of errors. I don't know if those errors happen because I don't understand the logic of the ProcessSynchronizer or if this is a simple bug. I have a small piece of code that reproduces the problems: basically, if I pass a different path to the zarr ProcessSynchronizer class all the errors disappear, but it creates a new folder.

```python
import xarray
import zarr
import numpy as np

arr = xarray.DataArray(
    data=np.array([
        [1, 2, 7, 4, 5],
        [np.nan, 3, 5, 5, 6],
        [3, 3, np.nan, 5, 6],
        [np.nan, 3, 10, 5, 6],
        [np.nan, 7, 8, 5, 6],
    ], dtype=float),
    dims=['index', 'columns'],
    coords={'index': [0, 1, 2, 3, 4], 'columns': [0, 1, 2, 3, 4]},
)

# If the synchronizer is created using another path the code will work
# without any error, but it creates a new folder. Is that the correct way
# to use the process synchronizer?
# synchronizer = zarr.ProcessSynchronizer('dummy_array.sync')

# Using the original path produces a set of weird problems
synchronizer = zarr.ProcessSynchronizer('dummy_array')

# Executing the commented line below produces: PermissionError: [WinError 5]
# arr.to_dataset(name='data').to_zarr('dummy_array', mode='w', synchronizer=synchronizer, compute=True)
arr.to_dataset(name='data').to_zarr('dummy_array', mode='w', compute=True)

# If this line is uncommented, a different error is thrown when
# xarray.open_zarr is executed
# a = zarr.open_array('dummy_array/data', synchronizer=synchronizer, mode='r')

# PermissionError: [Errno 13] Permission denied
xarray.open_zarr('dummy_array', synchronizer=synchronizer)
```

Output of xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 165 Stepping 2, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: es_ES.cp1252
libhdf5: 1.10.4
libnetcdf: None
xarray: 0.17.0
pandas: 1.1.3
numpy: 1.19.2
scipy: 1.5.2
netCDF4: None
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: 2.7.1
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2.30.0
distributed: 2.30.1
matplotlib: 3.3.2
cartopy: None
seaborn: 0.11.0
numbagg: None
pint: None
setuptools: 50.3.1.post20201107
pip: 21.0.1
conda: 4.10.0
pytest: 6.2.3
IPython: 7.19.0
sphinx: 3.2.1
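For reference, a minimal sketch of the usage pattern shown in the zarr v2 tutorial, under the assumption that the synchronizer is meant to get its own directory for lock files, separate from the store itself; on that reading the extra folder is expected behavior rather than a bug:

```python
import numpy as np
import xarray
import zarr

# Simplified stand-in for the array built in the snippet above
arr = xarray.DataArray(np.zeros((5, 5)), dims=['index', 'columns'])

# Lock files live in their own directory ('dummy_array.sync' is a name
# chosen for this sketch), separate from the 'dummy_array' store, so that
# rewriting the store with mode='w' does not delete the locks in use.
synchronizer = zarr.ProcessSynchronizer('dummy_array.sync')

arr.to_dataset(name='data').to_zarr(
    'dummy_array', mode='w', synchronizer=synchronizer, compute=True
)
ds = xarray.open_zarr('dummy_array', synchronizer=synchronizer)
```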
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1088893989 | I_kwDOAMm_X85A5zQl | 6112 | Forward Fill not working when there are all-NaN chunks | josephnowak 25071375 | closed | 0 | 6 | 2021-12-27T01:27:05Z | 2022-01-03T17:55:59Z | 2022-01-03T17:55:59Z | CONTRIBUTOR |

What happened: I'm working with a report dataset that only has data in some specific periods of time. The problem is that when I use the forward fill method it returns many NaNs, even in the last cells (it's a forward fill without limit).

What you expected to happen: The array should not have NaNs in the last cells if it has data in any earlier cell, or there should be a warning somewhere.

Minimal Complete Verifiable Example:

```python
import numpy as np
import xarray as xr

xr.DataArray(
    [1, 2, np.nan, np.nan, np.nan, np.nan],
    dims=['a']
).chunk(
    2
).ffill(
    'a'
).compute()
```

output: array([ 1., 2., 2., 2., nan, nan])

Anything else we need to know?: I checked the internal xarray code for forward filling with dask a little, and I think the problem is that the algorithm first forward fills each block independently and then runs a map_overlap to forward fill across chunk boundaries. When a chunk is all-NaN this does not work, because the overlap step propagates the last value of that empty chunk, which is NaN (hope this helps).

Environment:

Output of xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.10.0 | packaged by conda-forge | (default, Nov 20 2021, 02:25:18) [GCC 9.4.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-1025-aws
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 0.20.2
pandas: 1.3.5
numpy: 1.21.4
scipy: None
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.10.3
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2021.12.0
distributed: 2021.12.0
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2021.11.1
cupy: None
pint: None
sparse: None
setuptools: 59.4.0
pip: 21.3.1
conda: None
pytest: 6.2.5
IPython: 7.30.1
sphinx: None
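Given the diagnosis above, one workaround sketch (assuming the cross-chunk handoff is indeed the culprit; this is not a confirmed fix from the thread): collapse the filled dimension into a single chunk before calling ffill, so there is no chunk boundary for an all-NaN block to break.

```python
import numpy as np
import xarray as xr

arr = xr.DataArray([1, 2, np.nan, np.nan, np.nan, np.nan], dims=['a'])

# -1 means "one chunk" along 'a' in dask, so the per-block forward fill
# alone produces the full result, at the cost of losing parallelism
# along that dimension.
filled = arr.chunk({'a': -1}).ffill('a').compute()
# expected: array([1., 2., 2., 2., 2., 2.])
```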
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6112/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
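As an illustration of how the filter described at the top of this page maps onto that schema, a sketch using Python's built-in sqlite3 module; the filename github.db is an assumption, since the actual database file is not named on this page.

```python
import sqlite3

# 'github.db' is a placeholder filename for the exported database.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, number, title, updated_at
    FROM issues
    WHERE state = 'closed' AND type = 'issue' AND user = 25071375
    ORDER BY updated_at DESC
    """
).fetchall()
for row in rows:
    print(row)
```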