issues
25 rows where state = "closed" and user = 90008 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2129180716 | PR_kwDOAMm_X85mld8X | 8736 | Make list_chunkmanagers more resilient to broken entrypoints | hmaarrfk 90008 | closed | 0 | 6 | 2024-02-11T21:37:38Z | 2024-03-13T17:54:02Z | 2024-03-13T17:54:02Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/8736 | As I'm developing my custom chunk manager, I often switch between my development branch and production branch, which breaks the entrypoint. This made xarray impossible to import unless I re-ran […]. This should help xarray be more resilient to other software's bugs in case they install malformed entrypoints. Example: […] (see the sketch following this record).
Thank you for considering.
This is mostly a quality-of-life thing for developers; I don't see this as a user-visible change. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
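The example code in the record above was cut off in this export. As a hedged sketch of the general idea only — not the actual diff from this PR; the entrypoint group name and the `load_chunkmanagers` helper are hypothetical — a loader that tolerates a broken entrypoint might look like this:

```python
import warnings
from importlib.metadata import entry_points


def load_chunkmanagers():
    """Load chunk-manager entrypoints, skipping any that fail to import.

    Hypothetical sketch: a half-installed module (e.g. after switching
    git branches) raises on .load(); catching the error keeps
    `import xarray` working instead of failing outright.
    """
    available = {}
    for ep in entry_points(group="xarray.chunkmanagers"):
        try:
            available[ep.name] = ep.load()
        except Exception as err:
            warnings.warn(
                f"Failed to load chunk manager entrypoint {ep.name!r}: {err!r}",
                stacklevel=2,
            )
    return available
```

The point is that one broken third-party package degrades to a warning rather than an ImportError at `import xarray` time.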
2131345470 | PR_kwDOAMm_X85ms1Q6 | 8738 | Don't break users that were already using ChunkManagerEntrypoint | hmaarrfk 90008 | closed | 0 | 1 | 2024-02-13T02:17:55Z | 2024-02-13T15:37:54Z | 2024-02-13T03:21:32Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/8738 | For example, you just broke cubed: https://github.com/xarray-contrib/cubed-xarray/blob/main/cubed_xarray/cubedmanager.py#L15 Not sure how much you care; it didn't seem like anybody other than me ever tried this module on GitHub...
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2034395026 | PR_kwDOAMm_X85hnUnc | 8534 | Point users to where in their code they should make mods for Dataset.dims | hmaarrfk 90008 | closed | 0 | 8 | 2023-12-10T14:31:29Z | 2023-12-10T18:50:10Z | 2023-12-10T18:23:42Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/8534 | It's somewhat annoying to get warnings that point to a line within a library where the warning is issued; it really makes it unclear what one needs to change. This points to the user's access of the […]
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8534/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1429172192 | I_kwDOAMm_X85VL2_g | 7239 | include/exclude lists in Dataset.expand_dims | hmaarrfk 90008 | closed | 0 | 6 | 2022-10-31T03:01:52Z | 2023-11-05T06:29:06Z | 2023-11-05T06:29:06Z | CONTRIBUTOR | Is your feature request related to a problem? I would like to be able to expand the dimensions of a dataset, but most of the time I only want to expand a few key variables. It would be nice if there were some kind of filter mechanism. Describe the solution you'd like:

```python
import xarray as xr

dataset = xr.Dataset(data_vars={'foo': 1, 'bar': 2})
dataset.expand_dims("zar", include_variables=["foo"])
# Only foo is expanded; bar is left alone.
```

Describe alternatives you've considered: Writing my own function; I'll probably do this (see the sketch following this record). Subclassing: too confusing, and easy to "diverge" from you all when you do decide to implement this. Additional context: For large datasets, you likely just want some key parameters expanded, not all of them. xarray version: 2022.10.0 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
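The request was closed, so the "write my own function" alternative from the record above is the practical route. A minimal sketch of such a wrapper under current xarray semantics — the `expand_dims_only` helper and its keyword are my own, not xarray API:

```python
import xarray as xr


def expand_dims_only(dataset, dim, include_variables):
    """Expand `dim` on the named data variables, leaving the rest alone."""
    expanded = dataset.expand_dims(dim)
    for name in dataset.data_vars:
        if name not in include_variables:
            # Restore the original, unexpanded variable.
            expanded[name] = dataset[name]
    return expanded


ds = xr.Dataset(data_vars={'foo': 1, 'bar': 2})
result = expand_dims_only(ds, "zar", include_variables=["foo"])
print(result["foo"].dims)  # ('zar',)
print(result["bar"].dims)  # ()
```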
1731320789 | PR_kwDOAMm_X85Rougi | 7883 | Avoid one call to len when getting ndim of Variables | hmaarrfk 90008 | closed | 0 | 3 | 2023-05-29T23:37:10Z | 2023-07-03T15:44:32Z | 2023-07-03T15:44:31Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7883 | I admit this is a super micro-optimization, but in certain cases it avoids the creation of a tuple and a call to len on it. I hit this as I was trying to understand why Variable indexing was so much slower than numpy indexing. It seems that bounds checking in Python is just slower than in C. Feel free to close this one if you don't want this kind of optimization (see the sketch following this record).
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7883/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
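For context on the record above, a toy illustration of the kind of change the title describes — this is a stand-in class, not xarray's actual Variable: counting stored dimension names avoids materializing the shape tuple just to take its length.

```python
import numpy as np


class VariableLike:
    """Toy stand-in for a dims + data container, to show the ndim idea."""

    def __init__(self, dims, data):
        self._dims = tuple(dims)
        self._data = np.asarray(data)

    @property
    def shape(self):
        # Delegates to the wrapped array; some duck arrays rebuild this
        # tuple on every access.
        return self._data.shape

    @property
    def ndim(self):
        # Instead of len(self.shape), count the dimension names the
        # object already stores -- no intermediate tuple, one fewer call.
        return len(self._dims)


v = VariableLike(('y', 'x'), np.zeros((3, 2)))
print(v.ndim)  # 2
```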
1428549868 | I_kwDOAMm_X85VJfDs | 7237 | The new NON_NANOSECOND_WARNING is not very nice to end users | hmaarrfk 90008 | closed | 0 | 5 | 2022-10-30T01:56:59Z | 2023-05-09T12:52:54Z | 2022-11-04T20:13:20Z | CONTRIBUTOR | What is your issue? The new nanosecond warning doesn't really point anybody to where they should change their code, nor does it really tell them how to fix it.
I think that, at the very least, the stacklevel should be specified when calling the […] (see the sketch following this record). It isn't really pretty, but I've been passing a parameter when I expect to pass a warning up to the end user, e.g. https://github.com/vispy/vispy/pull/2405 However, others have not liked that approach. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
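A hedged sketch of the `stacklevel` point made in the record above — the function names are illustrative, not xarray's internals: raising the stack level makes Python attribute the warning to the caller's line instead of the library's.

```python
import warnings


def _convert_times(values):
    # With stacklevel=2 the warning is reported at the caller's line,
    # so the user sees which of *their* statements triggered it rather
    # than a line deep inside the library.
    warnings.warn(
        "Converting non-nanosecond precision datetime values to "
        "nanosecond precision.",
        UserWarning,
        stacklevel=2,
    )
    return values


def user_code():
    return _convert_times([1, 2, 3])  # the warning points here


user_code()
```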
1306457778 | I_kwDOAMm_X85N3vay | 6791 | get_data or get_variable method | hmaarrfk 90008 | closed | 0 | 3 | 2022-07-15T20:24:31Z | 2023-04-29T03:40:01Z | 2023-04-29T03:40:01Z | CONTRIBUTOR | Is your feature request related to a problem? I often store a few scalars or arrays in xarray containers. However, when I want to optionally access their data, the code I have to run is:

```python
import numpy as np
import xarray as xr

dataset = xr.Dataset()
my_variable = dataset.get('my_variable', None)
if my_variable is not None:
    my_variable = my_variable.data
else:
    my_variable = np.asarray(1.0)  # the default value I actually want
```

Describe the solution you'd like (see the sketch following this record):

```python
import numpy as np
import xarray as xr

dataset = xr.Dataset()
my_variable = dataset.get_data('my_variable', np.asarray(1.0))
```

Describe alternatives you've considered: No response. Additional context: Thank you! |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6791/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
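xarray never grew `get_data`, but the requested behavior is a small wrapper over `Dataset.get`; a minimal sketch, with a free function standing in for the proposed method:

```python
import numpy as np
import xarray as xr


def get_data(dataset, name, default):
    """Return dataset[name].data if the variable exists, else default."""
    variable = dataset.get(name)
    return variable.data if variable is not None else default


dataset = xr.Dataset()
my_variable = get_data(dataset, 'my_variable', np.asarray(1.0))
print(my_variable)  # array(1.) -- the default, since the dataset is empty
```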
1675299031 | I_kwDOAMm_X85j2wjX | 7770 | Provide a public API for adding new backends | hmaarrfk 90008 | closed | 0 | 3 | 2023-04-19T17:06:24Z | 2023-04-20T00:15:23Z | 2023-04-20T00:15:23Z | CONTRIBUTOR | Is your feature request related to a problem? I understand that this is a double-edged sword, but we were relying on https://github.com/pydata/xarray/pull/7523 Describe the solution you'd like: Some agreed-upon way that we could create a new backend. This would allow users to provide more custom parameters to file creation attributes and other options that are currently not exposed via xarray. I've used this to overwrite some parameters like netcdf global variables. I've also used this to add […] I did it through a custom backend because it felt like a contentious feature at the time. (I really do think it helps performance.) Describe alternatives you've considered: A deprecation cycle in the future??? Maybe this could have been achieved with the definition of […] Additional context: We used this to define the alignment within a file. netcdf4 exposed this as a global variable, so we have to somewhat hack around it just before creation time. I mean, you can probably say: "Doing this is too complicated, we don't want to give any guarantees on this front." I would agree with you..... |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
690546795 | MDExOlB1bGxSZXF1ZXN0NDc3NDIwMTkz | 4400 | [WIP] Support nanosecond time encoding. | hmaarrfk 90008 | closed | 0 | 10 | 2020-09-02T00:16:04Z | 2023-03-26T20:59:00Z | 2023-03-26T20:08:50Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/4400 | Not too sure I have the bandwidth to complete this, seeing as cftime and datetime don't have nanoseconds, but maybe it can help somebody.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4400/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1475567394 | PR_kwDOAMm_X85ESe3u | 7356 | Avoid loading entire dataset by getting the nbytes in an array | hmaarrfk 90008 | closed | 0 | 14 | 2022-12-05T03:29:53Z | 2023-03-17T17:31:22Z | 2022-12-12T16:46:40Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7356 | Using […] Sad. (See the sketch following this record.)
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
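The body of the record above survives only as a fragment, but the title's idea is that asking a lazily loaded array for its size in bytes should not require reading it. A hedged sketch of the arithmetic involved — not the actual xarray change: byte size is derivable from metadata alone.

```python
import numpy as np


def nbytes_from_metadata(shape, dtype):
    # Element count times per-element width: no values are ever read,
    # so a lazy/on-disk array is never pulled into memory.
    return int(np.prod(shape, dtype=np.int64)) * np.dtype(dtype).itemsize


print(nbytes_from_metadata((1000, 1000), 'float64'))  # 8000000
```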
689502005 | MDExOlB1bGxSZXF1ZXN0NDc2NTM3Mzk3 | 4395 | WIP: Ensure that zarr.ZipStores are closed | hmaarrfk 90008 | closed | 0 | 4 | 2020-08-31T20:57:49Z | 2023-01-31T21:39:15Z | 2023-01-31T21:38:23Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/4395 | ZipStores aren't always closed, making it hard to use them as fluidly as regular zarr stores (see the workaround sketch following this record).
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4395/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
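Pending a fix like the record above, the usual workaround is to manage the store's lifetime explicitly; a sketch assuming the zarr 2.x API (in zarr 3 the class moved to zarr.storage.ZipStore), with an illustrative path:

```python
import xarray as xr
import zarr

ds = xr.Dataset({'a': ('x', [1, 2, 3])})

# Closing the ZipStore finalizes the zip archive's central directory;
# the context manager guarantees that even if the write raises.
with zarr.ZipStore('example.zip', mode='w') as store:
    ds.to_zarr(store)
```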
1468595351 | PR_kwDOAMm_X85D6oci | 7334 | Remove code used to support h5py<2.10.0 | hmaarrfk 90008 | closed | 0 | 1 | 2022-11-29T19:34:24Z | 2022-11-30T23:30:41Z | 2022-11-30T23:30:41Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7334 | It seems that the relevant issue was fixed in h5py 2.10.0: https://github.com/h5py/h5py/commit/466181b178c1b8a5bfa6fb8f217319e021f647e0 I'm not sure how far back you want to fix things. I'm hoping to test this on the CI. I found this since I've been auditing slowdowns in our codebase, which has caused me to review much of the reading pipeline. Do you want to add a test for h5py>=2.10.0? Or can we assume that users won't install things together? https://pypi.org/project/h5py/2.10.0/ I could, for example, set the backend to not be available if a version of h5py that is too old is detected (see the sketch following this record). One could, alternatively, just keep the code here.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
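One of the options floated in the record above — marking the backend unavailable on too-old h5py — might look like this sketch; the guard function is hypothetical, not xarray's actual plugin code:

```python
from packaging.version import Version


def h5netcdf_backend_available():
    """Hypothetical guard: only advertise the backend on h5py >= 2.10.0."""
    try:
        import h5py
    except ImportError:
        return False
    return Version(h5py.__version__) >= Version("2.10.0")
```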
1428274982 | PR_kwDOAMm_X85BzXXR | 7236 | Expand benchmarks for dataset insertion and creation | hmaarrfk 90008 | closed | 0 | 8 | 2022-10-29T13:55:19Z | 2022-10-31T15:04:13Z | 2022-10-31T15:03:58Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7236 | Taken from discussions in https://github.com/pydata/xarray/issues/7224#issuecomment-1292216344 Thank you @Illviljan
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7236/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1428264468 | PR_kwDOAMm_X85BzVOE | 7235 | Fix typo in benchmarks/merge.py | hmaarrfk 90008 | closed | 0 | 0 | 2022-10-29T13:28:12Z | 2022-10-29T15:52:45Z | 2022-10-29T15:52:45Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7235 | I don't think this affects what is displayed; that is determined by param_names
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1423321834 | PR_kwDOAMm_X85Bi5BN | 7222 | Actually make the fast code path return early for Aligner.align | hmaarrfk 90008 | closed | 0 | 6 | 2022-10-26T01:59:09Z | 2022-10-28T16:22:36Z | 2022-10-28T16:22:35Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7222 | In relation to my other PR.

Without this PR: [benchmark plot]

With the early return: [benchmark plot]

Removing the frivolous copy (does not pass tests): [benchmark plot]

Code for benchmark:

```python
from tqdm import tqdm
import xarray as xr
from time import perf_counter
import numpy as np

N = 1000

# Everybody is lazy loading now, so let's force modules to get instantiated
dummy_dataset = xr.Dataset()
dummy_dataset['a'] = 1
dummy_dataset['b'] = 1
del dummy_dataset

time_elapsed = np.zeros(N)
dataset = xr.Dataset()
# tqdm = iter
for i in tqdm(range(N)):
    time_start = perf_counter()
    dataset[f"var{i}"] = i
    time_end = perf_counter()
    time_elapsed[i] = time_end - time_start

# %%
from matplotlib import pyplot as plt

plt.plot(np.arange(N), time_elapsed * 1E3, label='Time to add one variable')
plt.xlabel("Number of existing variables")
plt.ylabel("Time to add a variable (ms)")
plt.ylim([0, 10])
plt.grid(True)
```

xref: https://github.com/pydata/xarray/pull/7221
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1423312198 | PR_kwDOAMm_X85Bi3Dp | 7221 | Remove debugging slow assert statement | hmaarrfk 90008 | closed | 0 | 13 | 2022-10-26T01:43:08Z | 2022-10-28T02:49:44Z | 2022-10-28T02:49:44Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7221 | We've been trying to understand why our code is slow. One part is that we use xarray.Datasets almost like dictionaries for our data. The following code is quite common for us: […]

However, through benchmarks, it became obvious that the […]

With this merge request: [benchmark plot]

Code for benchmark:

```python
from tqdm import tqdm
import xarray as xr
from time import perf_counter
import numpy as np

N = 1000

# Everybody is lazy loading now, so let's force modules to get instantiated
dummy_dataset = xr.Dataset()
dummy_dataset['a'] = 1
dummy_dataset['b'] = 1
del dummy_dataset

time_elapsed = np.zeros(N)
dataset = xr.Dataset()
for i in tqdm(range(N)):
    time_start = perf_counter()
    dataset[f"var{i}"] = i
    time_end = perf_counter()
    time_elapsed[i] = time_end - time_start

# %%
from matplotlib import pyplot as plt

plt.plot(np.arange(N), time_elapsed * 1E3, label='Time to add one variable')
plt.xlabel("Number of existing variables")
plt.ylabel("Time to add a variable (ms)")
plt.ylim([0, 50])
plt.grid(True)
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7221/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 } |
xarray 13221727 | pull | |||||
1423916687 | PR_kwDOAMm_X85Bk2By | 7223 | Dataset insertion benchmark | hmaarrfk 90008 | closed | 0 | 2 | 2022-10-26T12:09:14Z | 2022-10-27T15:38:09Z | 2022-10-27T15:38:09Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7223 | xref: https://github.com/pydata/xarray/pull/7221
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1410575877 | PR_kwDOAMm_X85A4LHp | 7172 | Lazy import dask.distributed to reduce import time of xarray | hmaarrfk 90008 | closed | 0 | 9 | 2022-10-16T18:25:31Z | 2022-10-18T17:41:50Z | 2022-10-18T17:06:34Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7172 | I was auditing the import time of my software and found that distributed added a non-insignificant amount of time to the import of xarray (see the deferral sketch following this record). Using […] To audit, one can use the command […]

Proposed: [tuna import-profile graph]

One would be tempted to think that this is due to xarray.testing and xarray.tutorial, but those just move the imports one level down in tuna graphs.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7172/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 } |
xarray 13221727 | pull | |||||
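A hedged sketch of the deferral pattern behind the record above (the wrapper name is illustrative, not the PR's code): importing `dask.distributed` inside the function that needs it means plain `import xarray` never pays for it.

```python
def _acquire_distributed_lock(key):
    # Deferred import: dask.distributed and its dependency chain are only
    # imported when a distributed lock is actually requested, keeping
    # module import time low for everyone else.
    from dask.distributed import Lock

    return Lock(key)
```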
1098924491 | PR_kwDOAMm_X84wyU7M | 6154 | Use base ImportError, not ModuleNotFoundError, when testing for plugins | hmaarrfk 90008 | closed | 0 | 4 | 2022-01-11T09:48:36Z | 2022-01-11T10:28:51Z | 2022-01-11T10:24:57Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/6154 | Admittedly I had a pretty broken environment (I manually uninstalled C dependencies for Python packages installed with conda), but I still expected xarray to "work" with a different backend. I hope the comments in the code explain why […] (see the sketch following this record). Thank you for considering.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
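The distinction behind the title above, sketched (the probe function is hypothetical): ModuleNotFoundError only covers a module that is absent, while a module that exists but fails to load — say, because its C library was uninstalled — raises a plain ImportError.

```python
import importlib


def backend_available(module_name):
    """Hypothetical probe: treat any failure to import as 'unavailable'.

    ModuleNotFoundError subclasses ImportError, so catching the base
    class also covers present-but-broken modules (e.g. a missing shared
    C library), not just missing ones.
    """
    try:
        importlib.import_module(module_name)
    except ImportError:
        return False
    return True


print(backend_available("definitely_not_installed_module"))  # False
```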
347962055 | MDU6SXNzdWUzNDc5NjIwNTU= | 2347 | Serialization of just coordinates | hmaarrfk 90008 | closed | 0 | 6 | 2018-08-06T15:03:29Z | 2022-01-09T04:28:49Z | 2022-01-09T04:28:49Z | CONTRIBUTOR | In the search for the perfect data storage mechanism, I find myself needing to store some of the images I am generating and the metadata separately. It is really useful for me to serialize just the coordinates of my DataArray. My serialization method of choice is json, since it allows me to read the metadata with just a text editor. For that, having the coordinates as a self-contained dictionary is really important. Currently, I convert just the coordinates to a dataset and serialize that. The code looks something like this:

```python
import xarray as xr
import numpy as np

# Setup an array with coordinates
n = np.zeros(3)
coords = {'x': np.arange(3)}
m = xr.DataArray(n, dims=['x'], coords=coords)

coords_dataset_dict = m.coords.to_dataset().to_dict()
coords_dict = coords_dataset_dict['coords']

# Read/Write dictionary to JSON file

# This works, but I'm essentially creating an empty dataset for it
coords_set = xr.Dataset.from_dict(coords_dataset_dict)
coords2 = coords_set.coords  # so many […]
```

Would encapsulating this functionality in the […] It would add 2 functions that would look like:

```python
def to_dict(self):
    # offload the heavy lifting to the Dataset class
    return self.to_dataset().to_dict()['coords']

def from_dict(self, d):
    # Offload the heavy lifting again to the Dataset class
    d_dataset = {'dims': [], 'attrs': [], 'coords': d}
    return Dataset.from_dict(d_dataset).coords
``` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
689390592 | MDU6SXNzdWU2ODkzOTA1OTI= | 4394 | Is it possible to append_dim to netcdf stores | hmaarrfk 90008 | closed | 0 | 2 | 2020-08-31T18:02:46Z | 2020-08-31T22:11:10Z | 2020-08-31T22:11:09Z | CONTRIBUTOR | Is your feature request related to a problem? Please describe. Feature request: it seems that it should be possible to append to netcdf4 stores along the unlimited dimensions. Is there an example of this? Describe the solution you'd like: I would like the following code to be valid:

```python
from xarray.tests.test_dataset import create_append_test_data

ds, ds_to_append, ds_with_new_var = create_append_test_data()

filename = 'test_dataset.nc'
# Choose any one of
# engine : {'netcdf4', 'scipy', 'h5netcdf'}
engine = 'netcdf4'

ds.to_netcdf(filename, mode='w', unlimited_dims=['time'], engine=engine)
ds_to_append.to_netcdf(filename, mode='a', unlimited_dims=['time'], engine=engine)
```

Describe alternatives you've considered: I guess you could use zarr, but the fact that it creates multiple files is a problem. Additional context: xarray version: 0.16.0 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4394/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
587398134 | MDExOlB1bGxSZXF1ZXN0MzkzMzQ5NzIx | 3888 | [WIP] [DEMO] Add tests for ZipStore for zarr | hmaarrfk 90008 | closed | 0 | 6 | 2020-03-25T02:29:20Z | 2020-03-26T04:23:05Z | 2020-03-25T21:57:09Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/3888 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
335608017 | MDU6SXNzdWUzMzU2MDgwMTc= | 2251 | netcdf roundtrip fails to preserve the shape of numpy arrays in attributes | hmaarrfk 90008 | closed | 0 | 5 | 2018-06-25T23:52:07Z | 2018-08-29T16:06:29Z | 2018-08-29T16:06:28Z | CONTRIBUTOR | Code Sample:

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.zeros((3, 3)), dims=('y', 'x'))
a.attrs['my_array'] = np.arange(6, dtype='uint8').reshape(2, 3)
a.to_netcdf('a.nc')

b = xr.open_dataarray('a.nc')
b.load()

assert np.all(b == a)
print('all arrays equal')
assert b.dtype == a.dtype
print('dtypes equal')

print(a.my_array.shape)
print(b.my_array.shape)
assert a.my_array.shape == b.my_array.shape
```

Problem description: I have some metadata that is in the form of numpy arrays. I would think that it should round trip with netcdf. Expected output: equal shapes inside the metadata. Output of […]
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
347712372 | MDExOlB1bGxSZXF1ZXN0MjA2MjQ3MjE4 | 2344 | FutureWarning: creation of DataArrays w/ coords Dataset | hmaarrfk 90008 | closed | 0 | 7 | 2018-08-05T16:34:59Z | 2018-08-06T16:02:09Z | 2018-08-06T16:02:09Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/2344 | Previously, this would raise a: FutureWarning: iteration over an xarray.Dataset will change in xarray v0.11 to only include data variables, not coordinates. Iterate over the Dataset.variables property instead to preserve existing behavior in a forwards compatible manner.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
347558405 | MDU6SXNzdWUzNDc1NTg0MDU= | 2340 | expand_dims erases named dim in the array's coordinates | hmaarrfk 90008 | closed | 0 | 5 | 2018-08-03T23:00:07Z | 2018-08-05T01:15:49Z | 2018-08-04T03:39:49Z | CONTRIBUTOR | Code sample, a copy-pastable example if possible:

```python
# %%
import xarray as xa
import numpy as np

n = np.zeros((3, 2))
data = xa.DataArray(n, dims=['y', 'x'], coords={'y': range(3), 'x': range(2)})
data = data.assign_coords(z=xa.DataArray(np.arange(6).reshape((3, 2)), dims=['y', 'x']))
print('Original Data')
print('=============')
print(data)

# %%
my_slice = data[0, 1]
print("Sliced data")
print("===========")
print("z coordinate remembers its own x value")
print(f'x = {my_slice.z.x}')

# %%
expanded_slice = data[0, 1].expand_dims('x')
print("expanded slice")
print("==============")
print("forgot that 'z' had 'x' coordinates")
print("but remembered it had a 'y' coordinate")
print(f"z = {expanded_slice.z}")
print(expanded_slice.z.x)
```

Output: […]

Problem description: The coordinate used to have an explicit dimension. When we expanded the dimension, that information should not have been erased. Note that information about other coordinates is maintained. The challenge: the coordinates probably have fewer dimensions than the original data. I'm not sure about xarray's model, but a few challenges come to mind: 1. Is the relative order of dimensions maintained between data in the same dataset/dataarray? 2. Can coordinates have MORE dimensions than the array itself? The answer to these two questions might make or break […] If not, then this becomes a very difficult problem to solve, since we don't know where to insert this new dimension in the coordinate array. Output of […]
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
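The schema above is enough to reproduce this page's query. A sketch using Python's sqlite3 module against a local copy of the database (the `github.db` filename is an assumption):

```python
import sqlite3

# The query behind this page: closed issues/PRs by user 90008, newest first.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT number, type, title, updated_at
    FROM issues
    WHERE state = 'closed' AND [user] = 90008
    ORDER BY updated_at DESC
    LIMIT 25
    """
).fetchall()
for number, type_, title, updated_at in rows:
    print(f"#{number} [{type_}] {title} ({updated_at})")
conn.close()
```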