issue_comments
38 rows where user = 22245117 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1572174061 | https://github.com/pydata/xarray/pull/7670#issuecomment-1572174061 | https://api.github.com/repos/pydata/xarray/issues/7670 | IC_kwDOAMm_X85dtXjt | malmans2 22245117 | 2023-06-01T14:34:44Z | 2023-06-01T14:34:44Z | CONTRIBUTOR | The cfgrib notebook in the documentation is broken. I guess it's related to this PR. See: https://docs.xarray.dev/en/stable/examples/ERA5-GRIB-example.html |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Delete built-in cfgrib backend 1639732867 | |
1561328867 | https://github.com/pydata/xarray/issues/5644#issuecomment-1561328867 | https://api.github.com/repos/pydata/xarray/issues/5644 | IC_kwDOAMm_X85dD_zj | malmans2 22245117 | 2023-05-24T15:02:44Z | 2023-05-24T15:02:44Z | CONTRIBUTOR |
Not sure, but I'll take a look! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
`polyfit` with weights alters the DataArray in place 955043280 | |
1557440032 | https://github.com/pydata/xarray/issues/7572#issuecomment-1557440032 | https://api.github.com/repos/pydata/xarray/issues/7572 | IC_kwDOAMm_X85c1KYg | malmans2 22245117 | 2023-05-22T15:35:54Z | 2023-05-22T15:35:54Z | CONTRIBUTOR | Hi! I was about to open a new issue about this, but it looks like it's a known issue with a stale PR... Let me know if I can help get this fixed! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
`polyfit` with weights alters the DataArray in place 955043280 | |
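The bug class discussed in this thread (an operation mutating its input as a side effect) can be shown with a minimal stdlib sketch; the function names are illustrative, not xarray's actual `polyfit` code:

```python
def normalize_in_place(values):
    # Buggy pattern: sorts the caller's list as a side effect
    values.sort()
    return values

def normalize_copy(values):
    # Fixed pattern: work on a copy so the caller's input is left intact
    values = sorted(values)
    return values

data = [3, 1, 2]
assert normalize_copy(data) == [1, 2, 3] and data == [3, 1, 2]
assert normalize_in_place(data) == [1, 2, 3] and data == [1, 2, 3]
```

The fix for this class of bug is always the same: operate on a defensive copy instead of the caller's object.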
1450112743 | https://github.com/pydata/xarray/issues/7572#issuecomment-1450112743 | https://api.github.com/repos/pydata/xarray/issues/7572 | IC_kwDOAMm_X85Wbvbn | malmans2 22245117 | 2023-03-01T12:58:40Z | 2023-03-01T12:59:06Z | CONTRIBUTOR | Slightly different issue related to the latest release of Looks like nczarr attribute key changed from |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
`test_open_nczarr` failing 1603831809 | |
1449751992 | https://github.com/pydata/xarray/issues/7572#issuecomment-1449751992 | https://api.github.com/repos/pydata/xarray/issues/7572 | IC_kwDOAMm_X85WaXW4 | malmans2 22245117 | 2023-03-01T10:00:07Z | 2023-03-01T10:00:07Z | CONTRIBUTOR | See: https://github.com/Unidata/netcdf-c/issues/2647 The problem is that with |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
`test_open_nczarr` failing 1603831809 | |
1449591384 | https://github.com/pydata/xarray/issues/7572#issuecomment-1449591384 | https://api.github.com/repos/pydata/xarray/issues/7572 | IC_kwDOAMm_X85WZwJY | malmans2 22245117 | 2023-03-01T08:48:21Z | 2023-03-01T08:48:21Z | CONTRIBUTOR | The problem comes from |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
`test_open_nczarr` failing 1603831809 | |
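The comments above describe an attribute key that was renamed between netcdf-c releases. As a heavily hedged sketch (the `_ARRAY_DIMENSIONS` key is xarray's zarr convention; the fallback key name below is an assumption for illustration, not verified NCZarr spelling), a reader can tolerate more than one spelling when looking up dimension names:

```python
import json

# Hypothetical attribute payload as it might appear in a .zattrs file.
attrs = json.loads('{"_ARRAY_DIMENSIONS": ["time", "x"]}')

def lookup_dims(attrs):
    # Try the xarray convention first, then a fallback key
    # (the fallback name here is a placeholder, not a real spec key).
    for key in ("_ARRAY_DIMENSIONS", "_nczarr_array"):
        if key in attrs:
            return attrs[key]
    return None

assert lookup_dims(attrs) == ["time", "x"]
```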
1144166243 | https://github.com/pydata/xarray/pull/6636#issuecomment-1144166243 | https://api.github.com/repos/pydata/xarray/issues/6636 | IC_kwDOAMm_X85EMpdj | malmans2 22245117 | 2022-06-01T21:40:42Z | 2022-06-01T21:40:42Z | CONTRIBUTOR | All set. Thanks everyone! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Use `zarr` to validate attrs when writing to zarr 1248389852 | |
1128588208 | https://github.com/pydata/xarray/issues/6610#issuecomment-1128588208 | https://api.github.com/repos/pydata/xarray/issues/6610 | IC_kwDOAMm_X85DROOw | malmans2 22245117 | 2022-05-17T08:40:04Z | 2022-05-17T15:04:04Z | CONTRIBUTOR | I'm getting errors with multi-indexes and
```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    dict(a=(("z",), np.ones(10))),
    coords=dict(b=(("z"), np.arange(2).repeat(5)), c=(("z"), np.arange(5).repeat(2))),
).set_index(bc=["b", "c"])
grouped = ds.groupby("bc")

with xr.set_options(use_flox=False):
    grouped.sum()  # OK
with xr.set_options(use_flox=True):
    grouped.sum()  # Error
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Update GroupBy constructor for grouping by multiple variables, dask arrays 1236174701 | |
1124863541 | https://github.com/pydata/xarray/issues/6597#issuecomment-1124863541 | https://api.github.com/repos/pydata/xarray/issues/6597 | IC_kwDOAMm_X85DDA41 | malmans2 22245117 | 2022-05-12T11:12:39Z | 2022-05-12T11:12:39Z | CONTRIBUTOR | Thanks - I think I might be misunderstanding how the new implementation works.
I tried the following changes, but both of them return an error:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
`polyval` with timedelta64 coordinates produces wrong results 1233717699 | |
1094221271 | https://github.com/pydata/xarray/pull/6420#issuecomment-1094221271 | https://api.github.com/repos/pydata/xarray/issues/6420 | IC_kwDOAMm_X85BOH3X | malmans2 22245117 | 2022-04-10T08:49:07Z | 2022-04-10T08:49:07Z | CONTRIBUTOR |
Documentation should be in good shape now. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add support in the "zarr" backend for reading NCZarr data 1183534905 | |
1093846662 | https://github.com/pydata/xarray/pull/6420#issuecomment-1093846662 | https://api.github.com/repos/pydata/xarray/issues/6420 | IC_kwDOAMm_X85BMsaG | malmans2 22245117 | 2022-04-09T10:04:21Z | 2022-04-09T10:04:21Z | CONTRIBUTOR | Thanks for the review @shoyer! This should be good to go now. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add support in the "zarr" backend for reading NCZarr data 1183534905 | |
1091147424 | https://github.com/pydata/xarray/pull/6420#issuecomment-1091147424 | https://api.github.com/repos/pydata/xarray/issues/6420 | IC_kwDOAMm_X85BCZag | malmans2 22245117 | 2022-04-07T06:56:41Z | 2022-04-07T06:56:41Z | CONTRIBUTOR | The code now looks for I'm not sure what's the best approach with |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add support in the "zarr" backend for reading NCZarr data 1183534905 | |
1081553127 | https://github.com/pydata/xarray/issues/6374#issuecomment-1081553127 | https://api.github.com/repos/pydata/xarray/issues/6374 | IC_kwDOAMm_X85AdzDn | malmans2 22245117 | 2022-03-29T08:01:36Z | 2022-03-29T08:01:36Z | CONTRIBUTOR | Thanks! #6420 looks at |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Should the zarr backend support NCZarr conventions? 1172229856 | |
1081139207 | https://github.com/pydata/xarray/issues/6374#issuecomment-1081139207 | https://api.github.com/repos/pydata/xarray/issues/6374 | IC_kwDOAMm_X85AcOAH | malmans2 22245117 | 2022-03-28T21:01:19Z | 2022-03-28T21:01:19Z | CONTRIBUTOR | Adding support for reading I'm not sure whether it is better to (i) add direct support for |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Should the zarr backend support NCZarr conventions? 1172229856 | |
1081134778 | https://github.com/pydata/xarray/pull/6420#issuecomment-1081134778 | https://api.github.com/repos/pydata/xarray/issues/6420 | IC_kwDOAMm_X85AcM66 | malmans2 22245117 | 2022-03-28T20:55:45Z | 2022-03-28T20:55:45Z | CONTRIBUTOR | The errors on Windows appear to be related to the fill_value written by the "netcdf4" backend in ".zattrs": it is written as Nan rather than NaN, so this probably needs to be addressed in netcdf4-python or netcdf-c. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add support in the "zarr" backend for reading NCZarr data 1183534905 | |
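The comment above notes that the fill_value lands in ".zattrs" as the literal Nan instead of NaN. A quick stdlib check shows why the capitalization matters: Python's json parser accepts the (non-standard) NaN token by default but rejects Nan, so a misspelled fill_value breaks any JSON consumer of the attributes file:

```python
import json
import math

# Python's json module accepts the non-standard token "NaN" by default...
assert math.isnan(json.loads("NaN"))

# ...but the misspelled "Nan" is a parse error.
try:
    json.loads("Nan")
    parsed = True
except ValueError:
    parsed = False
assert parsed is False
```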
882464988 | https://github.com/pydata/xarray/issues/5495#issuecomment-882464988 | https://api.github.com/repos/pydata/xarray/issues/5495 | IC_kwDOAMm_X840mVjc | malmans2 22245117 | 2021-07-19T11:15:03Z | 2021-07-19T11:15:03Z | CONTRIBUTOR | @shoyer I added |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add `typing-extensions` to the list of dependencies? 925444927 | |
864470759 | https://github.com/pydata/xarray/issues/5495#issuecomment-864470759 | https://api.github.com/repos/pydata/xarray/issues/5495 | MDEyOklzc3VlQ29tbWVudDg2NDQ3MDc1OQ== | malmans2 22245117 | 2021-06-19T22:20:35Z | 2021-06-19T22:20:35Z | CONTRIBUTOR | Looks like there isn't an action that only installs |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add `typing-extensions` to the list of dependencies? 925444927 | |
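For context on why `typing-extensions` matters as a dependency: the usual pattern (a sketch, not xarray's actual code) is to import from the stdlib `typing` module where available and fall back to the `typing_extensions` backport only on older Pythons, which is what makes the package a hard runtime dependency there:

```python
import sys

# Prefer the stdlib; fall back to the typing_extensions backport on old Pythons.
if sys.version_info >= (3, 8):
    from typing import Protocol
else:
    from typing_extensions import Protocol

class SupportsClose(Protocol):
    # Structural type: anything with a close() method matches.
    def close(self) -> None: ...
```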
860750129 | https://github.com/pydata/xarray/pull/5445#issuecomment-860750129 | https://api.github.com/repos/pydata/xarray/issues/5445 | MDEyOklzc3VlQ29tbWVudDg2MDc1MDEyOQ== | malmans2 22245117 | 2021-06-14T14:54:10Z | 2021-06-14T14:54:10Z | CONTRIBUTOR | Thanks @crusaderky! I think all your suggestions are now implemented. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Add `xr.unify_chunks()` top level method 912932344 | |
855375205 | https://github.com/pydata/xarray/issues/5435#issuecomment-855375205 | https://api.github.com/repos/pydata/xarray/issues/5435 | MDEyOklzc3VlQ29tbWVudDg1NTM3NTIwNQ== | malmans2 22245117 | 2021-06-06T10:25:07Z | 2021-06-06T10:25:07Z | CONTRIBUTOR | So under the hood use dask. In the example above, if my target is chunksize (1, 1), |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Broadcast does not return Datasets with unified chunks 911393744 | |
850219328 | https://github.com/pydata/xarray/issues/5368#issuecomment-850219328 | https://api.github.com/repos/pydata/xarray/issues/5368 | MDEyOklzc3VlQ29tbWVudDg1MDIxOTMyOA== | malmans2 22245117 | 2021-05-28T07:41:38Z | 2021-05-28T07:41:38Z | CONTRIBUTOR | Just ran into this bug, see #5393. Hopefully no one was already working on it... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
ds.mean('dim') drops strings dataarrays, even when the 'dim' is not dimension of the string dataarray 900502141 | |
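A minimal sketch of the behavior this issue title asks for (the names are illustrative, not xarray internals): a reduction over a dimension should touch only the variables that actually have that dimension, and leave the rest, such as string arrays without it, in the result:

```python
# Map each variable name to its dimensions.
variables = {"air": ("time", "x"), "station_name": ("x",)}
dim = "time"

# Only variables carrying `dim` should be reduced over it.
to_reduce = {name for name, dims in variables.items() if dim in dims}
# Variables without `dim` should pass through ds.mean("time") unchanged.
to_keep = {name for name, dims in variables.items() if dim not in dims}

assert to_reduce == {"air"}
assert to_keep == {"station_name"}
```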
849897412 | https://github.com/pydata/xarray/issues/5387#issuecomment-849897412 | https://api.github.com/repos/pydata/xarray/issues/5387 | MDEyOklzc3VlQ29tbWVudDg0OTg5NzQxMg== | malmans2 22245117 | 2021-05-27T19:51:41Z | 2021-05-27T19:51:41Z | CONTRIBUTOR | All tests are passing with |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
KeyError when trying to select a list of DataArrays with different name type 903983811 | |
849818227 | https://github.com/pydata/xarray/issues/5387#issuecomment-849818227 | https://api.github.com/repos/pydata/xarray/issues/5387 | MDEyOklzc3VlQ29tbWVudDg0OTgxODIyNw== | malmans2 22245117 | 2021-05-27T17:41:37Z | 2021-05-27T17:41:37Z | CONTRIBUTOR |
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
KeyError when trying to select a list of DataArrays with different name type 903983811 | |
849815977 | https://github.com/pydata/xarray/issues/5387#issuecomment-849815977 | https://api.github.com/repos/pydata/xarray/issues/5387 | MDEyOklzc3VlQ29tbWVudDg0OTgxNTk3Nw== | malmans2 22245117 | 2021-05-27T17:37:52Z | 2021-05-27T17:37:52Z | CONTRIBUTOR | I think |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
KeyError when trying to select a list of DataArrays with different name type 903983811 | |
846452788 | https://github.com/pydata/xarray/pull/5362#issuecomment-846452788 | https://api.github.com/repos/pydata/xarray/issues/5362 | MDEyOklzc3VlQ29tbWVudDg0NjQ1Mjc4OA== | malmans2 22245117 | 2021-05-22T19:24:26Z | 2021-05-22T19:24:26Z | CONTRIBUTOR |
All set I think! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Check dimensions before applying weighted operations 898841079 | |
669985622 | https://github.com/pydata/xarray/issues/4319#issuecomment-669985622 | https://api.github.com/repos/pydata/xarray/issues/4319 | MDEyOklzc3VlQ29tbWVudDY2OTk4NTYyMg== | malmans2 22245117 | 2020-08-06T15:05:57Z | 2020-08-06T15:05:57Z | CONTRIBUTOR | Got it! Thanks @keewis. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
KeyError when faceting along time dimensions 674379292 | |
669982126 | https://github.com/pydata/xarray/issues/4319#issuecomment-669982126 | https://api.github.com/repos/pydata/xarray/issues/4319 | MDEyOklzc3VlQ29tbWVudDY2OTk4MjEyNg== | malmans2 22245117 | 2020-08-06T15:00:04Z | 2020-08-06T15:00:04Z | CONTRIBUTOR | Here is the full error:
```
KeyError                                  Traceback (most recent call last)
<ipython-input-9-c00f9ae5bb67> in <module>
----> 1 airtemps['air'].isel(time=slice(2)).plot(col='time')

~/anaconda3/envs/ospy_tests/lib/python3.7/site-packages/xarray/plot/plot.py in __call__(self, **kwargs)
    444
    445     def __call__(self, **kwargs):
--> 446         return plot(self._da, **kwargs)
    447
    448     # we can't use functools.wraps here since that also modifies the name / qualname

~/anaconda3/envs/ospy_tests/lib/python3.7/site-packages/xarray/plot/plot.py in plot(darray, row, col, col_wrap, ax, hue, rtol, subplot_kws, **kwargs)
    198         kwargs["ax"] = ax
    199
--> 200     return plotfunc(darray, **kwargs)
    201
    202

~/anaconda3/envs/ospy_tests/lib/python3.7/site-packages/xarray/plot/plot.py in newplotfunc(darray, x, y, figsize, size, aspect, ax, row, col, col_wrap, xincrease, yincrease, add_colorbar, add_labels, vmin, vmax, cmap, center, robust, extend, levels, infer_intervals, colors, subplot_kws, cbar_ax, cbar_kwargs, xscale, yscale, xticks, yticks, xlim, ylim, norm, **kwargs)
    636             # Need the decorated plotting function
    637             allargs["plotfunc"] = globals()[plotfunc.__name__]
--> 638             return _easy_facetgrid(darray, kind="dataarray", **allargs)
    639
    640         plt = import_matplotlib_pyplot()

~/anaconda3/envs/ospy_tests/lib/python3.7/site-packages/xarray/plot/facetgrid.py in _easy_facetgrid(data, plotfunc, kind, x, y, row, col, col_wrap, sharex, sharey, aspect, size, subplot_kws, ax, figsize, **kwargs)
    642
    643     if kind == "dataarray":
--> 644         return g.map_dataarray(plotfunc, x, y, **kwargs)
    645
    646     if kind == "dataset":

~/anaconda3/envs/ospy_tests/lib/python3.7/site-packages/xarray/plot/facetgrid.py in map_dataarray(self, func, x, y, **kwargs)
    263         # Get x, y labels for the first subplot
    264         x, y = _infer_xy_labels(
--> 265             darray=self.data.loc[self.name_dicts.flat[0]],
    266             x=x,
    267             y=y,

~/anaconda3/envs/ospy_tests/lib/python3.7/site-packages/xarray/core/dataarray.py in __getitem__(self, key)
    196             labels = indexing.expanded_indexer(key, self.data_array.ndim)
    197             key = dict(zip(self.data_array.dims, labels))
--> 198         return self.data_array.sel(**key)
    199
    200     def __setitem__(self, key, value) -> None:

~/anaconda3/envs/ospy_tests/lib/python3.7/site-packages/xarray/core/dataarray.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   1152             method=method,
   1153             tolerance=tolerance,
-> 1154             **indexers_kwargs,
   1155         )
   1156         return self._from_temp_dataset(ds)

~/anaconda3/envs/ospy_tests/lib/python3.7/site-packages/xarray/core/dataset.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   2100         indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel")
   2101         pos_indexers, new_indexes = remap_label_indexers(
-> 2102             self, indexers=indexers, method=method, tolerance=tolerance
   2103         )
   2104         result = self.isel(indexers=pos_indexers, drop=drop)

~/anaconda3/envs/ospy_tests/lib/python3.7/site-packages/xarray/core/coordinates.py in remap_label_indexers(obj, indexers, method, tolerance, **indexers_kwargs)
    395
    396     pos_indexers, new_indexes = indexing.remap_label_indexers(
--> 397         obj, v_indexers, method=method, tolerance=tolerance
    398     )
    399     # attach indexer's coordinate to pos_indexers

~/anaconda3/envs/ospy_tests/lib/python3.7/site-packages/xarray/core/indexing.py in remap_label_indexers(data_obj, indexers, method, tolerance)
    268             coords_dtype = data_obj.coords[dim].dtype
    269             label = maybe_cast_to_coords_dtype(label, coords_dtype)
--> 270             idxr, new_idx = convert_label_indexer(index, label, dim, method, tolerance)
    271             pos_indexers[dim] = idxr
    272             if new_idx is not None:

~/anaconda3/envs/ospy_tests/lib/python3.7/site-packages/xarray/core/indexing.py in convert_label_indexer(index, label, index_name, method, tolerance)
    188         else:
    189             indexer = index.get_loc(
--> 190                 label.item(), method=method, tolerance=tolerance
    191             )
    192     elif label.dtype.kind == "b":

~/anaconda3/envs/ospy_tests/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py in get_loc(self, key, method, tolerance)
    620         else:
    621             # unrecognized type
--> 622             raise KeyError(key)
    623
    624     try:

KeyError: 1356998400000000000
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
KeyError when faceting along time dimensions 674379292 | |
633651638 | https://github.com/pydata/xarray/issues/4077#issuecomment-633651638 | https://api.github.com/repos/pydata/xarray/issues/4077 | MDEyOklzc3VlQ29tbWVudDYzMzY1MTYzOA== | malmans2 22245117 | 2020-05-25T16:54:55Z | 2020-05-25T17:49:03Z | CONTRIBUTOR | Yup, happy to do it. Just one doubt. I think in cases where |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset overwrites variables with different values but overlapping coordinates 620514214 | |
633586248 | https://github.com/pydata/xarray/issues/4077#issuecomment-633586248 | https://api.github.com/repos/pydata/xarray/issues/4077 | MDEyOklzc3VlQ29tbWVudDYzMzU4NjI0OA== | malmans2 22245117 | 2020-05-25T13:59:18Z | 2020-05-25T13:59:18Z | CONTRIBUTOR | Nevermind, it looks like if the check goes into |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset overwrites variables with different values but overlapping coordinates 620514214 | |
633577882 | https://github.com/pydata/xarray/issues/4077#issuecomment-633577882 | https://api.github.com/repos/pydata/xarray/issues/4077 | MDEyOklzc3VlQ29tbWVudDYzMzU3Nzg4Mg== | malmans2 22245117 | 2020-05-25T13:39:37Z | 2020-05-25T13:39:37Z | CONTRIBUTOR | If What about something like this? I think it would cover all possibilities, but maybe it is too expensive?
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset overwrites variables with different values but overlapping coordinates 620514214 | |
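The check debated in this thread can be roughed out with a stdlib sketch (hypothetical helper, cost linear in the total number of coordinate values): flag coordinate labels that occur in more than one input file before combining, which is exactly the overlap that lets one file silently overwrite another:

```python
def overlapping(coords_list):
    # Return the set of coordinate labels shared by two or more inputs.
    seen = set()
    overlap = set()
    for coords in coords_list:
        s = set(coords)
        overlap |= seen & s
        seen |= s
    return overlap

# Two files whose coordinate ranges overlap at label 2.
assert overlapping([[0, 1, 2], [2, 3]]) == {2}
# Disjoint files produce no overlap, so combining is safe.
assert overlapping([[0, 1], [2, 3]]) == set()
```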
630692045 | https://github.com/pydata/xarray/issues/4077#issuecomment-630692045 | https://api.github.com/repos/pydata/xarray/issues/4077 | MDEyOklzc3VlQ29tbWVudDYzMDY5MjA0NQ== | malmans2 22245117 | 2020-05-19T09:08:59Z | 2020-05-19T09:08:59Z | CONTRIBUTOR | Got it, Thanks! Let me know if it is worth adding some checks. I'd be happy to work on it. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset overwrites variables with different values but overlapping coordinates 620514214 | |
580989966 | https://github.com/pydata/xarray/issues/3734#issuecomment-580989966 | https://api.github.com/repos/pydata/xarray/issues/3734 | MDEyOklzc3VlQ29tbWVudDU4MDk4OTk2Ng== | malmans2 22245117 | 2020-02-01T04:20:27Z | 2020-02-01T04:20:27Z | CONTRIBUTOR | This fixes the problem:
Does it make sense to add the following somewhere in _determine_cmap_params?
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Wrong facet plots when all 2D arrays have one value only 557931967 | |
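A hedged sketch of the guard being proposed (the function name and epsilon are illustrative, not xarray's actual _determine_cmap_params code): when every value in the array is identical, vmin equals vmax and the color limits must be widened, otherwise the colormap normalization has zero range and faceted plots render wrongly:

```python
def widen_degenerate_limits(vmin, vmax, eps=0.1):
    # If the data is constant, vmin == vmax; pad both ends so the
    # colormap normalization has a non-zero range.
    if vmin == vmax:
        vmin -= eps
        vmax += eps
    return vmin, vmax

assert widen_degenerate_limits(1.0, 1.0) == (0.9, 1.1)
assert widen_degenerate_limits(0.0, 2.0) == (0.0, 2.0)
```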
454439392 | https://github.com/pydata/xarray/issues/2662#issuecomment-454439392 | https://api.github.com/repos/pydata/xarray/issues/2662 | MDEyOklzc3VlQ29tbWVudDQ1NDQzOTM5Mg== | malmans2 22245117 | 2019-01-15T15:45:03Z | 2019-01-15T15:45:03Z | CONTRIBUTOR | I checked PR #2678 with the data that originated the issue and it fixes the problem! |
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset in v.0.11.1 is very slow 397063221 | |
454086847 | https://github.com/pydata/xarray/issues/2662#issuecomment-454086847 | https://api.github.com/repos/pydata/xarray/issues/2662 | MDEyOklzc3VlQ29tbWVudDQ1NDA4Njg0Nw== | malmans2 22245117 | 2019-01-14T17:20:03Z | 2019-01-14T17:20:03Z | CONTRIBUTOR | I've created a little script to reproduce the problem.
@TomNicholas it looks like datasets are opened correctly. The problem arises when
```python
import numpy as np
import xarray as xr
import os

Tsize = 100; T = np.arange(Tsize)
Xsize = 900; X = np.arange(Xsize)
Ysize = 800; Y = np.arange(Ysize)
data = np.random.randn(Tsize, Xsize, Ysize)
for i in range(2):
```
Fast if netCDFs are stored in one folder:
Slow if netCDFs are stored in several folders:
Fast if files containing different variables are opened separately, then merged:
|
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
open_mfdataset in v.0.11.1 is very slow 397063221 | |
390267025 | https://github.com/pydata/xarray/issues/2145#issuecomment-390267025 | https://api.github.com/repos/pydata/xarray/issues/2145 | MDEyOklzc3VlQ29tbWVudDM5MDI2NzAyNQ== | malmans2 22245117 | 2018-05-18T16:50:47Z | 2018-05-22T19:18:34Z | CONTRIBUTOR | In my previous comment I said that this would be useful for staggered grids, but then I realized that resample only operates on the time dimension. Anyway, here is my example:
```python
import xarray as xr
import pandas as pd
import numpy as np

# Create coordinates
time = pd.date_range('1/1/2018', periods=365, freq='D')
space = pd.np.arange(10)

# Create random variables
var_withtime1 = np.random.randn(len(time), len(space))
var_withtime2 = np.random.randn(len(time), len(space))
var_timeless1 = np.random.randn(len(space))
var_timeless2 = np.random.randn(len(space))

# Create dataset
ds = xr.Dataset({'var_withtime1': (['time', 'space'], var_withtime1),
                 'var_withtime2': (['time', 'space'], var_withtime2),
                 'var_timeless1': (['space'], var_timeless1),
                 'var_timeless2': (['space'], var_timeless2)},
                coords={'time': (['time',], time), 'space': (['space',], space)})

# Standard resample: this adds the time dimension to the timeless variables
ds_resampled = ds.resample(time='1M').mean()

# My workaround: this does not add the time dimension to the timeless variables
ds_withtime = ds.drop([var for var in ds.variables if not 'time' in ds[var].dims])
ds_timeless = ds.drop([var for var in ds.variables if 'time' in ds[var].dims])
ds_workaround = xr.merge([ds_timeless, ds_withtime.resample(time='1M').mean()])
```
Datasets:
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Dataset.resample() adds time dimension to independant variables 323839238 | |
373694632 | https://github.com/pydata/xarray/issues/1985#issuecomment-373694632 | https://api.github.com/repos/pydata/xarray/issues/1985 | MDEyOklzc3VlQ29tbWVudDM3MzY5NDYzMg== | malmans2 22245117 | 2018-03-16T12:09:50Z | 2018-03-16T12:09:50Z | CONTRIBUTOR | Alright, I found the problem. I'm loading several variables from different files. All the variables have 1464 snapshots; however, one of the 3D variables has just one snapshot at a different time (I found a bug in my bash script that re-organizes the raw data). When I load my dataset using .open_mfdataset, the time dimension therefore has an extra snapshot (length 1465). xarray doesn't like that: when I run functions such as to_netcdf, it takes forever (no error). Thanks @fujiisoup for the help! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Load a small subset of data from a big dataset takes forever 304624171 | |
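The debugging step described above (one file contributing a single stray snapshot to the combined time axis) can be sketched with a stdlib counter over per-file time values; the numbers below are made up for illustration:

```python
from collections import Counter

# Hypothetical per-file time axes: two clean files plus one with a stray value.
times_per_file = [[0, 1, 2], [0, 1, 2], [0, 1, 2, 99]]

# A timestamp appearing in only one file is a likely data-organization bug.
counts = Counter(t for times in times_per_file for t in times)
stray = [t for t, n in counts.items() if n == 1]
assert stray == [99]
```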
372570107 | https://github.com/pydata/xarray/issues/1985#issuecomment-372570107 | https://api.github.com/repos/pydata/xarray/issues/1985 | MDEyOklzc3VlQ29tbWVudDM3MjU3MDEwNw== | malmans2 22245117 | 2018-03-13T07:21:10Z | 2018-03-13T07:21:10Z | CONTRIBUTOR | I forgot to mention that I'm getting this warning: /home/idies/anaconda3/lib/python3.5/site-packages/dask/core.py:306: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison elif type_arg is type(key) and arg == key: However, I don't think it is relevant since I get the same warning when I'm able to run .to_netcdf() on the 3D variable. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Load a small subset of data from a big dataset takes forever 304624171 | |
372566304 | https://github.com/pydata/xarray/issues/1985#issuecomment-372566304 | https://api.github.com/repos/pydata/xarray/issues/1985 | MDEyOklzc3VlQ29tbWVudDM3MjU2NjMwNA== | malmans2 22245117 | 2018-03-13T07:01:51Z | 2018-03-13T07:01:51Z | CONTRIBUTOR | The problem occurs when I run the very last line, which is to_netcdf().
Right before, the dataset looks like this:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Load a small subset of data from a big dataset takes forever 304624171 | |
372558850 | https://github.com/pydata/xarray/issues/1985#issuecomment-372558850 | https://api.github.com/repos/pydata/xarray/issues/1985 | MDEyOklzc3VlQ29tbWVudDM3MjU1ODg1MA== | malmans2 22245117 | 2018-03-13T06:19:47Z | 2018-03-13T06:23:00Z | CONTRIBUTOR | I have the same issue if I don't copy the dataset. Here are the coordinates of my dataset:
```
I think somewhere I trigger the loading of the whole dataset. Otherwise, I don't understand why it works when I open just one month instead of the whole year. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Load a small subset of data from a big dataset takes forever 304624171 |
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```