id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type
2129180716,PR_kwDOAMm_X85mld8X,8736,Make list_chunkmanagers more resilient to broken entrypoints,90008,closed,0,,,6,2024-02-11T21:37:38Z,2024-03-13T17:54:02Z,2024-03-13T17:54:02Z,CONTRIBUTOR,,0,pydata/xarray/pulls/8736,"As I'm developing my custom chunk manager, I'm often switching between my development branch and my production branch, which breaks the entrypoint. This made xarray impossible to import unless I re-ran `pip install -e . -vv`, which is somewhat tiring. This should help xarray be more resilient to bugs in other software in case it installs malformed entrypoints.

Example:

```python
>>> from xarray.core.parallelcompat import list_chunkmanagers
>>> list_chunkmanagers()
<stdin>:1: UserWarning: Failed to load entrypoint MyChunkManager due to No module named 'my.array._chunkmanager'. Skipping.
list_chunkmanagers()
{'dask': }
```

Thank you for considering.

- [x] Closes #xxxx
- [x] Tests added
- [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [x] New functions/methods are listed in `api.rst`

This is mostly a quality-of-life thing for developers; I don't see this as a user-visible change.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8736/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
2131345470,PR_kwDOAMm_X85ms1Q6,8738,Don't break users that were already using ChunkManagerEntrypoint,90008,closed,0,,,1,2024-02-13T02:17:55Z,2024-02-13T15:37:54Z,2024-02-13T03:21:32Z,CONTRIBUTOR,,0,pydata/xarray/pulls/8738,"For example, you just broke cubed: https://github.com/xarray-contrib/cubed-xarray/blob/main/cubed_xarray/cubedmanager.py#L15

Not sure how much you care; it didn't seem like anybody other than me ever tried this module on GitHub...

- [ ] Closes #xxxx
- [ ] Tests added
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8738/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
2034395026,PR_kwDOAMm_X85hnUnc,8534,Point users to where in their code they should make mods for Dataset.dims,90008,closed,0,,,8,2023-12-10T14:31:29Z,2023-12-10T18:50:10Z,2023-12-10T18:23:42Z,CONTRIBUTOR,,0,pydata/xarray/pulls/8534,"It's somewhat annoying to get warnings that point to a line within a library where the warning is issued; it really makes it unclear what one needs to change. This points to the user's access of the `dims` attribute instead.

- [ ] Closes #xxxx
- [ ] Tests added
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8534/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
1429172192,I_kwDOAMm_X85VL2_g,7239,include/exclude lists in Dataset.expand_dims,90008,closed,0,,,6,2022-10-31T03:01:52Z,2023-11-05T06:29:06Z,2023-11-05T06:29:06Z,CONTRIBUTOR,,,,"### Is your feature request related to a problem?
I would like to be able to expand the dimensions of a dataset, but most of the time I only want to expand a few key variables. It would be nice if there were some kind of filter mechanism.

### Describe the solution you'd like

```python
import xarray as xr

dataset = xr.Dataset(data_vars={'foo': 1, 'bar': 2})
dataset.expand_dims(""zar"", include_variables=[""foo""])
# Only foo is expanded; bar is left alone.
```

### Describe alternatives you've considered

Writing my own function. I'll probably do this.

Subclassing. Too confusing, and it's easy to ""diverge"" from you all when you do decide to implement this.

### Additional context

For large datasets, you likely just want some key parameters expanded, not all of them.

xarray version: 2022.10.0","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7239/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
1731320789,PR_kwDOAMm_X85Rougi,7883,Avoid one call to len when getting ndim of Variables,90008,closed,0,,,3,2023-05-29T23:37:10Z,2023-07-03T15:44:32Z,2023-07-03T15:44:31Z,CONTRIBUTOR,,0,pydata/xarray/pulls/7883,"I admit this is a super micro-optimization, but in certain cases it avoids the creation of a tuple and a call to `len` on it.

I hit this as I was trying to understand why Variable indexing was so much slower than numpy indexing. It seems that bounds checking in Python is just slower than in C.

Feel free to close this one if you don't want this kind of optimization.

- [ ] Closes #xxxx
- [ ] Tests added
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7883/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
1428549868,I_kwDOAMm_X85VJfDs,7237,The new NON_NANOSECOND_WARNING is not very nice to end users,90008,closed,0,,,5,2022-10-30T01:56:59Z,2023-05-09T12:52:54Z,2022-11-04T20:13:20Z,CONTRIBUTOR,,,,"### What is your issue?

The new nanosecond warning doesn't really point anybody to where they should change their code, nor does it really tell them how to fix it.

```
import xarray as xr
import numpy as np

xr.DataArray(np.zeros(1, dtype='datetime64[us]'))
```

yields

```
xarray/core/variable.py:194: UserWarning: Converting non-nanosecond precision datetime values to nanosecond precision. This behavior can eventually be relaxed in xarray, as it is an artifact from pandas which is now beginning to support non-nanosecond precision values.
```

https://github.com/pydata/xarray/blob/f32d354e295c05fb5c5ece7862f77f19d82d5894/xarray/core/variable.py#L194

I think that, at the very least, the stacklevel should be specified when calling the `warn` function. It isn't really pretty, but I've been passing a parameter up when I expect to pass a warning to the end user, e.g.
https://github.com/vispy/vispy/pull/2405

However, others have not liked that approach.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7237/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
1306457778,I_kwDOAMm_X85N3vay,6791,get_data or get_variable method,90008,closed,0,,,3,2022-07-15T20:24:31Z,2023-04-29T03:40:01Z,2023-04-29T03:40:01Z,CONTRIBUTOR,,,,"### Is your feature request related to a problem?

I often store a few scalars or arrays in xarray containers. However, when I want to optionally access their data, this is the code I have to run:

```python
import numpy as np
import xarray as xr

dataset = xr.Dataset()
my_variable = dataset.get('my_variable', None)
if my_variable is not None:
    my_variable = my_variable.data
else:
    my_variable = np.asarray(1.0)  # the default value I actually want
```

### Describe the solution you'd like

```python
import numpy as np
import xarray as xr

dataset = xr.Dataset()
my_variable = dataset.get_data('my_variable', np.asarray(1.0))
```

### Describe alternatives you've considered

_No response_

### Additional context

Thank you!","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6791/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
1675299031,I_kwDOAMm_X85j2wjX,7770,Provide a public API for adding new backends,90008,closed,0,,,3,2023-04-19T17:06:24Z,2023-04-20T00:15:23Z,2023-04-20T00:15:23Z,CONTRIBUTOR,,,,"### Is your feature request related to a problem?

I understand that this is a double-edged sword, but we were relying on `BACKEND_ENTRYPOINTS` being a dictionary to a class, and that broke in https://github.com/pydata/xarray/pull/7523

### Describe the solution you'd like

Some agreed-upon way that we could create a new backend. This would allow users to provide more custom parameters for file creation attributes and other options that are currently not exposed via xarray.

I've used this to override some parameters like netcdf global variables. I've also used this to add `alignment_threshold` and `alignment_interval` to h5netcdf. I did it through a custom backend because it felt like a contentious feature at the time. (I really do think it helps performance.)

### Describe alternatives you've considered

A deprecation cycle in the future?

Maybe this could have been achieved by defining `RELOADABLE_BACKEND_ENTRYPOINTS` and leaving `BACKEND_ENTRYPOINTS` unchanged in signature.

### Additional context

We used this to define the alignment within a file. netcdf4 exposes this as a global variable, so we have to somewhat hack around it just before creation time.

I mean, you can probably say: ""Doing this is too complicated, we don't want to give any guarantees on this front."" I would agree with you...","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7770/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
690546795,MDExOlB1bGxSZXF1ZXN0NDc3NDIwMTkz,4400,[WIP] Support nanosecond time encoding.,90008,closed,0,,,10,2020-09-02T00:16:04Z,2023-03-26T20:59:00Z,2023-03-26T20:08:50Z,CONTRIBUTOR,,0,pydata/xarray/pulls/4400,"Not too sure I have the bandwidth to complete this, seeing as cftime and datetime don't have nanoseconds, but maybe it can help somebody.
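For context on the precision gap (an illustrative snippet, not code from this PR): numpy's `datetime64` can resolve nanoseconds, while the standard library `datetime` type bottoms out at microseconds.

```python
import datetime

import numpy as np

# numpy's datetime64 can carry nanosecond precision...
t = np.datetime64('2020-09-02T00:16:04.123456789', 'ns')

# ...but datetime.datetime only resolves microseconds, so a round trip
# through it would silently drop the last three digits.
assert datetime.datetime.resolution == datetime.timedelta(microseconds=1)
```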
- [x] Closes #4183
- [x] Tests added
- [ ] Passes `isort . && black . && mypy . && flake8`
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4400/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
1475567394,PR_kwDOAMm_X85ESe3u,7356,Avoid loading entire dataset by getting the nbytes in an array,90008,closed,0,,,14,2022-12-05T03:29:53Z,2023-03-17T17:31:22Z,2022-12-12T16:46:40Z,CONTRIBUTOR,,0,pydata/xarray/pulls/7356,"Using `.data` accidentally tries to load whole lazy arrays into memory. Sad.

- [ ] Closes #xxxx
- [ ] Tests added
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7356/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
689502005,MDExOlB1bGxSZXF1ZXN0NDc2NTM3Mzk3,4395,WIP: Ensure that zarr.ZipStores are closed,90008,closed,0,,,4,2020-08-31T20:57:49Z,2023-01-31T21:39:15Z,2023-01-31T21:38:23Z,CONTRIBUTOR,,0,pydata/xarray/pulls/4395,"ZipStores aren't always closed, making it hard to use them as fluidly as regular zarr stores.

- [ ] Closes #xxxx
- [x] Tests added
- [x] Passes `isort . && black . && mypy . && flake8` # master doesn't pass black
- [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4395/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
1468595351,PR_kwDOAMm_X85D6oci,7334,Remove code used to support h5py<2.10.0,90008,closed,0,,,1,2022-11-29T19:34:24Z,2022-11-30T23:30:41Z,2022-11-30T23:30:41Z,CONTRIBUTOR,,0,pydata/xarray/pulls/7334,"It seems that the relevant issue was fixed in 2.10.0: https://github.com/h5py/h5py/commit/466181b178c1b8a5bfa6fb8f217319e021f647e0

I'm not sure how far back you want to fix things. I'm hoping to test this on the CI. I found this since I've been auditing slowdowns in our codebase, which has caused me to review much of the reading pipeline.

Do you want to add a test for h5py>=2.10.0? Or can we assume that users won't install current xarray together with such an old h5py? https://pypi.org/project/h5py/2.10.0/

I could, for example, make the backend unavailable if a version of h5py that is too old is detected; see the sketch below. Alternatively, one could just keep the code here.
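A minimal sketch of that version gate (hypothetical helper, not xarray's actual backend API):

```python
# Illustrative only: report whether the installed h5py already contains
# the upstream fix, so a backend could disable itself on older versions.
from packaging.version import Version


def h5py_is_recent_enough(minimum: str = '2.10.0') -> bool:
    try:
        import h5py
    except ImportError:
        return False
    return Version(h5py.__version__) >= Version(minimum)
```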
- [ ] Closes #xxxx
- [ ] Tests added
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7334/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
1428274982,PR_kwDOAMm_X85BzXXR,7236,Expand benchmarks for dataset insertion and creation,90008,closed,0,,,8,2022-10-29T13:55:19Z,2022-10-31T15:04:13Z,2022-10-31T15:03:58Z,CONTRIBUTOR,,0,pydata/xarray/pulls/7236,"Taken from discussions in https://github.com/pydata/xarray/issues/7224#issuecomment-1292216344

Thank you @Illviljan

- [ ] Closes #xxxx
- [ ] Tests added
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7236/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
1428264468,PR_kwDOAMm_X85BzVOE,7235,Fix type in benchmarks/merge.py,90008,closed,0,,,0,2022-10-29T13:28:12Z,2022-10-29T15:52:45Z,2022-10-29T15:52:45Z,CONTRIBUTOR,,0,pydata/xarray/pulls/7235,"I don't think this affects what is displayed; that is determined by `param_names`.

- [ ] Closes #xxxx
- [ ] Tests added
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7235/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
1423321834,PR_kwDOAMm_X85Bi5BN,7222,Actually make the fast code path return early for Aligner.align,90008,closed,0,,,6,2022-10-26T01:59:09Z,2022-10-28T16:22:36Z,2022-10-28T16:22:35Z,CONTRIBUTOR,,0,pydata/xarray/pulls/7222,"In relation to my other PR.

Without this PR:
![image](https://user-images.githubusercontent.com/90008/197916473-0149747e-25b0-41d6-921d-1fad62a23699.png)

With the early return:
![image](https://user-images.githubusercontent.com/90008/197916546-9ea9a020-2683-4d62-805a-b386835d61c0.png)
Removing the frivolous copy (does not pass tests):
![image](https://user-images.githubusercontent.com/90008/197916632-dbc89c21-94a9-4b92-af11-5b1fa5f5cddd.png)
Code for the benchmark:

```python
from tqdm import tqdm
import xarray as xr
from time import perf_counter
import numpy as np

N = 1000

# Everybody is lazy loading now, so let's force modules to get instantiated
dummy_dataset = xr.Dataset()
dummy_dataset['a'] = 1
dummy_dataset['b'] = 1
del dummy_dataset

time_elapsed = np.zeros(N)
dataset = xr.Dataset()

# tqdm = iter
for i in tqdm(range(N)):
    time_start = perf_counter()
    dataset[f""var{i}""] = i
    time_end = perf_counter()
    time_elapsed[i] = time_end - time_start

# %%
from matplotlib import pyplot as plt

plt.plot(np.arange(N), time_elapsed * 1E3, label='Time to add one variable')
plt.xlabel(""Number of existing variables"")
plt.ylabel(""Time to add a variable (ms)"")
plt.ylim([0, 10])
plt.grid(True)
```
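A schematic of the fix itself (a hypothetical simplification, not the actual diff to `Aligner.align`):

```python
# Illustrative only: the fast path computed its result but then fell
# through to the full realignment work; the fix is the early return.
class Aligner:
    def align(self) -> None:
        if not self.indexes and len(self.objects) == 1:
            # Fast path: one object and no explicit indexes to align against.
            (obj,) = self.objects
            self.results = (obj.copy(deep=self.copy),)
            return  # <-- previously missing, so the slow path still ran

        # ... full (slow) realignment path follows ...
```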
xref: https://github.com/pydata/xarray/pull/7221

- [ ] Closes #xxxx
- [ ] Tests added
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7222/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
1423312198,PR_kwDOAMm_X85Bi3Dp,7221,Remove debugging slow assert statement,90008,closed,0,,,13,2022-10-26T01:43:08Z,2022-10-28T02:49:44Z,2022-10-28T02:49:44Z,CONTRIBUTOR,,0,pydata/xarray/pulls/7221,"We've been trying to understand why our code is slow. One part is that we use xarray.Datasets almost like dictionaries for our data. The following code is quite common for us:

```python
import xarray as xr

dataset = xr.Dataset()
dataset['a'] = 1
dataset['b'] = 2
```

However, through benchmarks, it became obvious that the `merge_core` method of xarray was causing a lot of slowdowns.

`main` branch:
![image](https://user-images.githubusercontent.com/90008/197914741-c920046a-e957-4584-9e00-082575fd1f6c.png)

With this merge request:
![image](https://user-images.githubusercontent.com/90008/197914642-9d9439a3-397b-4f04-abb2-ddc62c7b4849.png)

```python
from tqdm import tqdm
import xarray as xr
from time import perf_counter
import numpy as np

N = 1000

# Everybody is lazy loading now, so let's force modules to get instantiated
dummy_dataset = xr.Dataset()
dummy_dataset['a'] = 1
dummy_dataset['b'] = 1
del dummy_dataset

time_elapsed = np.zeros(N)
dataset = xr.Dataset()

for i in tqdm(range(N)):
    time_start = perf_counter()
    dataset[f""var{i}""] = i
    time_end = perf_counter()
    time_elapsed[i] = time_end - time_start

# %%
from matplotlib import pyplot as plt

plt.plot(np.arange(N), time_elapsed * 1E3, label='Time to add one variable')
plt.xlabel(""Number of existing variables"")
plt.ylabel(""Time to add a variable (ms)"")
plt.ylim([0, 50])
plt.grid(True)
```

- [ ] Closes #xxxx
- [ ] Tests added
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7221/reactions"", ""total_count"": 2, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 2, ""eyes"": 0}",,,13221727,pull
1423916687,PR_kwDOAMm_X85Bk2By,7223,Dataset insertion benchmark,90008,closed,0,,,2,2022-10-26T12:09:14Z,2022-10-27T15:38:09Z,2022-10-27T15:38:09Z,CONTRIBUTOR,,0,pydata/xarray/pulls/7223,"xref: https://github.com/pydata/xarray/pull/7221

- [ ] Closes #xxxx
- [ ] Tests added
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7223/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
1410575877,PR_kwDOAMm_X85A4LHp,7172,Lazy import dask.distributed to reduce import time of xarray,90008,closed,0,,,9,2022-10-16T18:25:31Z,2022-10-18T17:41:50Z,2022-10-18T17:06:34Z,CONTRIBUTOR,,0,pydata/xarray/pulls/7172,"I was auditing the import time of my software and found that distributed added a non-insignificant amount of time to the import of xarray. Using `tuna`, one can find the sources of delay in xarray's import time. To audit, one can use the following command:
```
python -X importtime -c ""import numpy as np; import pandas as pd; import dask.array; import xarray as xr"" 2>import.log && tuna import.log
```

As written, the command breaks out the import time of numpy, pandas, and dask.array to allow you to focus on the ""other"" costs within xarray.

Main branch:
![image](https://user-images.githubusercontent.com/90008/196051640-8bb182a9-fbb0-4b83-a39d-a576fec25249.png)

Proposed:
![image](https://user-images.githubusercontent.com/90008/196051596-34d87232-5cb9-4f3d-84f9-d2ec969c95ce.png)

One would be tempted to think that this is due to xarray.testing and xarray.tutorial, but those just move the imports one level down in tuna graphs.

![image](https://user-images.githubusercontent.com/90008/196051584-7895b64c-319a-4f9f-8327-b254b6571551.png)

- [x] ~~Closes~~
- [x] ~~Tests added~~
- [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [x] ~~New functions/methods are listed in `api.rst`~~","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7172/reactions"", ""total_count"": 3, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 3, ""eyes"": 0}",,,13221727,pull
1098924491,PR_kwDOAMm_X84wyU7M,6154,Use base ImportError not ModuleNotFoundError when testing for plugins,90008,closed,0,,,4,2022-01-11T09:48:36Z,2022-01-11T10:28:51Z,2022-01-11T10:24:57Z,CONTRIBUTOR,,0,pydata/xarray/pulls/6154,"Admittedly, I had a pretty broken environment (I manually uninstalled C dependencies for python packages installed with conda), but I still expected xarray to ""work"" with a different backend. I hope the comments in the code explain why `ImportError` is preferred to `ModuleNotFoundError`.

Thank you for considering.

- [ ] Closes #xxxx
- [ ] Tests added
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6154/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
347962055,MDU6SXNzdWUzNDc5NjIwNTU=,2347,Serialization of just coordinates,90008,closed,0,,,6,2018-08-06T15:03:29Z,2022-01-09T04:28:49Z,2022-01-09T04:28:49Z,CONTRIBUTOR,,,,"In the search for the perfect data storage mechanism, I find myself needing to store the images I am generating and their metadata separately. It is really useful for me to serialize just the coordinates of my DataArray. My serialization method of choice is JSON, since it allows me to read the metadata with just a text editor. For that, having the coordinates as a self-contained dictionary is really important. Currently, I convert just the coordinates to a [dataset](http://xarray.pydata.org/en/stable/data-structures.html#coordinates-methods) and serialize that.
The code looks something like this:

```python
import xarray as xr
import numpy as np

# Set up an array with coordinates
n = np.zeros(3)
coords = {'x': np.arange(3)}
m = xr.DataArray(n, dims=['x'], coords=coords)

coords_dataset_dict = m.coords.to_dataset().to_dict()
coords_dict = coords_dataset_dict['coords']

# Read/write the dictionary to a JSON file

# This works, but I'm essentially creating an empty dataset for it
coords_set = xr.Dataset.from_dict(coords_dataset_dict)
coords2 = coords_set.coords  # so many `coords` :D

m2 = xr.DataArray(np.zeros(shape=m.shape), dims=m.dims, coords=coords2)
```

Would encapsulating this functionality in the `Coordinates` class be accepted as a PR? It would add 2 functions that would look like:

```python
def to_dict(self):
    # offload the heavy lifting to the Dataset class
    return self.to_dataset().to_dict()['coords']

def from_dict(self, d):
    # offload the heavy lifting again to the Dataset class
    d_dataset = {'dims': [], 'attrs': [], 'coords': d}
    return Dataset.from_dict(d_dataset).coords
```
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2347/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
689390592,MDU6SXNzdWU2ODkzOTA1OTI=,4394,Is it possible to append_dim to netcdf stores,90008,closed,0,,,2,2020-08-31T18:02:46Z,2020-08-31T22:11:10Z,2020-08-31T22:11:09Z,CONTRIBUTOR,,,,"**Is your feature request related to a problem? Please describe.**

Feature request: it seems that it should be possible to append to netcdf4 stores along the unlimited dimensions. Is there an example of this?

**Describe the solution you'd like**

I would like the following code to be valid:

```python
from xarray.tests.test_dataset import create_append_test_data

ds, ds_to_append, ds_with_new_var = create_append_test_data()

filename = 'test_dataset.nc'

# Choose any one of
# engine : {'netcdf4', 'scipy', 'h5netcdf'}
engine = 'netcdf4'

ds.to_netcdf(filename, mode='w', unlimited_dims=['time'], engine=engine)
ds_to_append.to_netcdf(filename, mode='a', unlimited_dims=['time'], engine=engine)
```

**Describe alternatives you've considered**

I guess you could use zarr, but the fact that it creates multiple files is a problem.

**Additional context**

xarray version: 0.16.0
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4394/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
587398134,MDExOlB1bGxSZXF1ZXN0MzkzMzQ5NzIx,3888,[WIP] [DEMO] Add tests for ZipStore for zarr,90008,closed,0,,,6,2020-03-25T02:29:20Z,2020-03-26T04:23:05Z,2020-03-25T21:57:09Z,CONTRIBUTOR,,0,pydata/xarray/pulls/3888,"- [ ] Related to #3815
- [ ] Tests added
- [ ] Passes `isort -rc . && black . && mypy . && flake8`
- [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3888/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
335608017,MDU6SXNzdWUzMzU2MDgwMTc=,2251,netcdf roundtrip fails to preserve the shape of numpy arrays in attributes,90008,closed,0,,,5,2018-06-25T23:52:07Z,2018-08-29T16:06:29Z,2018-08-29T16:06:28Z,CONTRIBUTOR,,,,"#### Code Sample

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.zeros((3, 3)), dims=('y', 'x'))
a.attrs['my_array'] = np.arange(6, dtype='uint8').reshape(2, 3)
a.to_netcdf('a.nc')

b = xr.open_dataarray('a.nc')
b.load()

assert np.all(b == a)
print('all arrays equal')

assert b.dtype == a.dtype
print('dtypes equal')

print(a.my_array.shape)
print(b.my_array.shape)
assert a.my_array.shape == b.my_array.shape
```

#### Problem description

I have some metadata that is in the form of numpy arrays. I would think that it should round-trip through netcdf.

#### Expected Output

Equal shapes inside the metadata.

#### Output of ``xr.show_versions()``
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Linux
OS-release: 4.16.15-300.fc28.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

xarray: 0.10.7
pandas: 0.23.0
numpy: 1.14.4
scipy: 1.1.0
netCDF4: 1.4.0
h5netcdf: 0.6.1
h5py: 2.8.0
Nio: None
zarr: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.17.5
distributed: 1.21.8
matplotlib: 2.2.2
cartopy: None
seaborn: None
setuptools: 39.2.0
pip: 9.0.3
conda: None
pytest: 3.6.1
IPython: 6.4.0
sphinx: 1.7.5
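A workaround sketch (my addition, not from the original report; it leans on the netCDF restriction that attribute values are scalars or flat 1-D vectors, which is why the shape gets lost):

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.zeros((3, 3)), dims=('y', 'x'))
arr = np.arange(6, dtype='uint8').reshape(2, 3)

# Store the values flat, plus the shape needed to rebuild them.
a.attrs['my_array'] = arr.ravel()
a.attrs['my_array_shape'] = arr.shape
a.to_netcdf('a_workaround.nc')

b = xr.open_dataarray('a_workaround.nc')
restored = np.asarray(b.attrs['my_array']).reshape(b.attrs['my_array_shape'])
assert restored.shape == arr.shape
```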
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2251/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 347712372,MDExOlB1bGxSZXF1ZXN0MjA2MjQ3MjE4,2344,FutureWarning: creation of DataArrays w/ coords Dataset,90008,closed,0,,,7,2018-08-05T16:34:59Z,2018-08-06T16:02:09Z,2018-08-06T16:02:09Z,CONTRIBUTOR,,0,pydata/xarray/pulls/2344,"Previously, this would raise a: FutureWarning: iteration over an xarray.Dataset will change in xarray v0.11 to only include data variables, not coordinates. Iterate over the Dataset.variables property instead to preserve existing behavior in a forwards compatible manner. - [ ] Closes #xxxx (remove if there is no corresponding issue, which should only be the case for minor changes) - [ ] Tests added (for all bug fixes or enhancements) - [ ] Tests passed (for all non-documentation changes) - [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2344/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 347558405,MDU6SXNzdWUzNDc1NTg0MDU=,2340,expand_dims erases named dim in the array's coordinates,90008,closed,0,,,5,2018-08-03T23:00:07Z,2018-08-05T01:15:49Z,2018-08-04T03:39:49Z,CONTRIBUTOR,,,,"#### Code Sample, a copy-pastable example if possible ```python # %% import xarray as xa import numpy as np n = np.zeros((3, 2)) data = xa.DataArray(n, dims=['y', 'x'], coords={'y':range(3), 'x':range(2)}) data = data.assign_coords(z=xa.DataArray(np.arange(6).reshape((3, 2)), dims=['y', 'x'])) print('Original Data') print('=============') print(data) # %% my_slice = data[0, 1] print(""Sliced data"") print(""==========="") print(""z coordinate remembers it's own x value"") print(f'x = {my_slice.z.x}') # %% expanded_slice = data[0, 1].expand_dims('x') print(""expanded slice"") print(""=============="") print(""forgot that 'z' had 'x' coordinates"") print(""but remembered it had a 'y' coordinate"") print(f""z = {expanded_slice.z}"") print(expanded_slice.z.x) ``` Output: ``` Original Data ============= array([[0., 0.], [0., 0.], [0., 0.]]) Coordinates: * y (y) int32 0 1 2 * x (x) int32 0 1 z (y, x) int32 0 1 2 3 4 5 Sliced data =========== z coordinate remembers it's own x value x = array(1) Coordinates: y int32 0 x int32 1 z int32 1 expanded slice ============== forgot that 'z' had 'x' coordinates but remembered it had a 'y' coordinate z = array(1) Coordinates: y int32 0 z int32 1 AttributeError: 'DataArray' object has no attribute 'x' ``` #### Problem description The coordinate used to have an explicit dimension. When we expanded the dimension, that information should not be erased. Note that information about other coordinates are maintained. #### The challenge The coordinates probably have fewer dimensions than the original data. I'm not sure about xarray's model, but a few challenges come to mind: 1. is the relative order of dimensions maintained between data in the same dataset/dataarray? 2. Can coordinates have MORE dimensions than the array itself? 
The answer to these two questions might make or break this. If the order is not maintained, then this becomes a very difficult problem to solve, since we don't know where to insert this new dimension in the coordinate array.

#### Output of ``xr.show_versions()``
xa.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.6.final.0
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 79 Stepping 1, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en
LOCALE: None.None

xarray: 0.10.7
pandas: 0.23.1
numpy: 1.14.3
scipy: 1.1.0
netCDF4: 1.4.0
h5netcdf: 0.6.1
h5py: 2.8.0
Nio: None
zarr: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.18.1
distributed: 1.22.0
matplotlib: 2.2.2
cartopy: None
seaborn: None
setuptools: 39.2.0
pip: 9.0.3
conda: None
pytest: 3.7.1
IPython: 6.4.0
sphinx: 1.7.5
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2340/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue