
issue_comments

30,000 rows sorted by updated_at descending

user >30

  • shoyer 5,143
  • dcherian 2,724
  • max-sixty 2,489
  • keewis 1,740
  • jhamman 1,180
  • mathause 781
  • rabernat 731
  • fmaussion 690
  • TomNicholas 591
  • fujiisoup 539
  • benbovy 523
  • crusaderky 488
  • Illviljan 466
  • kmuehlbauer 457
  • headtr1ck 440
  • spencerkclark 438
  • stale[bot] 381
  • pep8speaks 361
  • mrocklin 284
  • andersy005 221
  • pwolfram 207
  • github-actions[bot] 206
  • alexamici 125
  • clarkfitzg 124
  • dopplershift 123
  • nbren12 116
  • ocefpaf 115
  • jbusecke 114
  • snowman2 106
  • hmaarrfk 104
  • …

issue >30

  • WIP: Zarr backend 103
  • CFTimeIndex 70
  • Explicit indexes in xarray's data-model (Future of MultiIndex) 68
  • ENH: use `dask.array.apply_gufunc` in `xr.apply_ufunc` 63
  • support for units 62
  • Multidimensional groupby 61
  • WIP: indexing with broadcasting 60
  • Feature Request: Hierarchical storage and processing in xarray 60
  • release v0.18.0 60
  • Integration with dask/distributed (xarray backend design) 59
  • Appending to zarr store 59
  • Fixes OS error arising from too many files open 54
  • How should xarray use/support sparse arrays? 54
  • Html repr 54
  • Hooks for XArray operations 53
  • cov() and corr() - finalization 52
  • implement interp() 51
  • Use pytorch as backend for xarrays 49
  • Use xarray.open_dataset() for password-protected Opendap files 48
  • open_mfdataset too many files 47
  • Add methods for combining variables of differing dimensionality 46
  • Explicit indexes 46
  • ENH: Scatter plots of one variable vs another 45
  • Add CRS/projection information to xarray objects 45
  • tests for arrays with units 45
  • merge scipy19 docs 45
  • 0.13.0 release 43
  • Implement interp for interpolating between chunks of data (dask) 42
  • xarray to and from iris 40
  • WIP: html repr 40
  • …
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1280906370 https://github.com/pydata/xarray/pull/7173#issuecomment-1280906370 https://api.github.com/repos/pydata/xarray/issues/7173 IC_kwDOAMm_X85MWRSC Illviljan 14371165 2022-10-17T13:57:47Z 2024-03-20T23:12:49Z MEMBER

Scatter vs. Lines:

```python
ds = xr.tutorial.scatter_example_dataset(seed=42)
hue_ = "y"
x_ = "y"
size_ = "y"
z_ = "z"

fig = plt.figure()
ax = fig.add_subplot(1, 2, 1, projection='3d')
ds.A.sel(w="one").plot.lines(x=x_, z=z_, hue=hue_, linewidth=size_, ax=ax)
ax = fig.add_subplot(1, 2, 2, projection='3d')
ds.A.sel(w="one").plot.scatter(x=x_, z=z_, hue=hue_, markersize=size_, ax=ax)
```
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add LineCollection plot 1410608825
285380106 https://github.com/pydata/xarray/issues/1303#issuecomment-285380106 https://api.github.com/repos/pydata/xarray/issues/1303 MDEyOklzc3VlQ29tbWVudDI4NTM4MDEwNg== rabernat 1197350 2017-03-09T15:18:18Z 2024-02-06T17:57:21Z MEMBER

Just wanted to link to a somewhat related discussion happening in brian-rose/climlab#50.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `xarray.core.variable.as_variable()` part of the public API? 213004586
1406463669 https://github.com/pydata/xarray/issues/7377#issuecomment-1406463669 https://api.github.com/repos/pydata/xarray/issues/7377 IC_kwDOAMm_X85T1O61 maawoo 56583917 2023-01-27T12:45:10Z 2024-01-03T08:41:41Z CONTRIBUTOR

Hi all, I just created a simple workaround, which might be useful for others:
https://gist.github.com/maawoo/0b34d371c3cc1960a1589ccaded868c2

It uses the _nan_quantile method of xclim and works fine for my applications. Here is a quick comparison using the same example data as in my initial post:

EDIT: I've updated the code to use numbagg instead of xclim.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Aggregating a dimension using the Quantiles method with `skipna=True` is very slow 1497031605
732361140 https://github.com/pydata/xarray/issues/4601#issuecomment-732361140 https://api.github.com/repos/pydata/xarray/issues/4601 MDEyOklzc3VlQ29tbWVudDczMjM2MTE0MA== max-sixty 5635139 2020-11-23T19:00:30Z 2023-09-24T19:44:17Z MEMBER

Great observation @mathause . I think there are two parts of this:

  • Do we want other libraries which do da.longitude to raise a mypy error? That may be a tradeoff with raising the true error on da.isel
  • How do we design the type hierarchy? We could add methods to DataWithCoords or add some Dataset_Or_DataArray-like type

Having methods like isel typed would be a win I think
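The tradeoff in the first bullet can be sketched with a plain class (hypothetical names, not xarray's actual hierarchy): once `__getattr__` is annotated as returning `Any`, mypy accepts every attribute name, so attribute typos pass type checking silently.

```python
from typing import Any


class WithDynamicAttrs:
    """Stand-in for attribute-style coordinate access (illustrative only).

    Because __getattr__ is annotated to return Any, mypy accepts *any*
    attribute name on this class, so a typo like obj.lngitude type-checks
    even though it fails at runtime.
    """

    def __init__(self) -> None:
        self._fields = {"longitude": 12.5}

    def __getattr__(self, name: str) -> Any:
        # only called when normal attribute lookup fails
        try:
            return self._fields[name]
        except KeyError:
            raise AttributeError(name)


obj = WithDynamicAttrs()
print(obj.longitude)  # runtime OK; mypy also accepts obj.<anything>
```

Running mypy on code using this class shows the tradeoff: no error is reported for misspelled attributes, which is exactly why typing `isel` and friends explicitly would be a win.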

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Don't type check __getattr__? 748684119
732472373 https://github.com/pydata/xarray/issues/4601#issuecomment-732472373 https://api.github.com/repos/pydata/xarray/issues/4601 MDEyOklzc3VlQ29tbWVudDczMjQ3MjM3Mw== max-sixty 5635139 2020-11-23T22:51:38Z 2023-09-24T19:36:02Z MEMBER

Good point re accessors, I hadn't considered those. So sounds like raising an error on da.isel isn't possible regardless...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Don't type check __getattr__? 748684119
1259228475 https://github.com/pydata/xarray/issues/6293#issuecomment-1259228475 https://api.github.com/repos/pydata/xarray/issues/6293 IC_kwDOAMm_X85LDk07 benbovy 4160723 2022-09-27T09:22:04Z 2023-08-24T11:42:53Z MEMBER

Following thoughts and discussions in various issues (e.g., #6836), I'd like to suggest another section to the ones in the top comment:

Deprecate pandas.MultiIndex special cases in Xarray

  • remove the multi-index “dimension” coordinate (tuple elements)
  • do not automatically promote pandas.MultiIndex objects as dimension + level coordinates, e.g., like in xr.Dataset(coords={"x": pd_midx}) but instead treat it as a single duck-array.
  • do not accept pandas.MultiIndex as dim argument in xarray.concat() (#7148)
  • remove obj.to_index() for all xarray objects?
  • (EDIT) remove Dataset.reset_index() and DataArray.reset_index()

They are source of many problems and complexities in Xarray internals (many regressions reported since the index refactor were related to those special cases) and I'm not sure that the value they add is really worth the trouble. Also, in the long term the special treatment of PandasMultiIndex vs. other Xarray multi-indexes may add some confusion.

Some of those features are widely used (e.g., the creation of Dataset / DataArray from pandas multi-indexes is used in many places in unit tests), so we would need convenient alternatives and a smooth transition.
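For illustration, the "tuple elements" and level values mentioned in the bullets above come straight from pandas (a minimal sketch; the xr.Dataset promotion itself is the behavior being deprecated, so only the pandas side is shown here).

```python
import pandas as pd

# The kind of object whose automatic promotion is discussed above.
midx = pd.MultiIndex.from_product(
    [["a", "b"], [1, 2]], names=["letter", "number"]
)

# The tuple elements that currently become the multi-index "dimension"
# coordinate when passed as, e.g., xr.Dataset(coords={"x": midx}):
print(list(midx))

# The per-level values that currently become level coordinates:
print(midx.get_level_values("letter").tolist())
print(midx.get_level_values("number").tolist())
```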

{
    "total_count": 5,
    "+1": 5,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Explicit indexes: next steps 1148021907
585452791 https://github.com/pydata/xarray/issues/3762#issuecomment-585452791 https://api.github.com/repos/pydata/xarray/issues/3762 MDEyOklzc3VlQ29tbWVudDU4NTQ1Mjc5MQ== bjcosta 6491058 2020-02-12T22:34:56Z 2023-08-02T19:50:42Z NONE

Actually it looks like this example is relevant: https://xarray.pydata.org/en/stable/examples/apply_ufunc_vectorize_1d.html

Hi dcherian,

I had a look at the apply_ufunc() example you linked and have re-implemented my code. The example helped me understand apply_ufunc() usage better but is very different from my use case and I still am unable to parallelize using dask.

The key difference is that apply_ufunc(), as described in the docs and the example, applies a function to a vector of data of a single type (in the example case it is air temperature across the 3 dimensions lat, long, time).

Whereas I need to apply an operation using heterogeneous data (depth_bins, lower_limit, upper_limit) over a single dimension (time) to produce a new array of depths over time (which is why I tried groupby/map initially).

Anyhow, I have an implementation using apply_ufunc() that works using xarray and numpy arrays with apply_ufunc(), but when I try to parallelize it using dask my ufunc is called with empty arrays by xarray and it fails.

I.e. you can see when running the code below that it logs the following when entering the ufunc:

args: (array([], shape=(0, 0), dtype=int32), array([], dtype=int32), array([], dtype=int32), array([], dtype=int32)), kwargs: {}

I was expecting this to be called once for each chunk with 1000 items for each array.

Have I done something wrong in this work-around for the groupby/map code?

Thanks, Brendon

```python
import sys
import math
import logging

import dask
import dask.distributed  # needed for dask.distributed.Client() below
import xarray
import numpy

logger = logging.getLogger('main')

if __name__ == '__main__':
    logging.basicConfig(
        stream=sys.stdout,
        format='%(asctime)s %(levelname)-8s %(message)s',
        level=logging.INFO,
        datefmt='%Y-%m-%d %H:%M:%S')

logger.info('Starting dask client')
client = dask.distributed.Client()

SIZE = 3000
SONAR_BINS = 2000
upper_limit = numpy.random.randint(0, 10, (SIZE))
lower_limit = numpy.random.randint(20, 30, (SIZE))
sonar_data = numpy.random.randint(0, 255, (SIZE, SONAR_BINS))

time = range(0, SIZE)

channel = xarray.Dataset({
        'upper_limit': (['time'], upper_limit, {'units': 'depth meters'}),
        'lower_limit': (['time'],  lower_limit, {'units': 'depth meters'}),
        'data': (['time', 'depth_bin'], sonar_data, {'units': 'amplitude'}),
    },
    coords={
        'time': (['time'], time),
        'depth_bin': (['depth_bin'], range(0,SONAR_BINS)),
    })

logger.info('get overall min/max radar range we want to normalize to called the adjusted range')
adjusted_min, adjusted_max = channel.upper_limit.min().values.item(), channel.lower_limit.max().values.item()
adjusted_min = math.floor(adjusted_min)
adjusted_max = math.ceil(adjusted_max)
logger.info('adjusted_min: %s, adjusted_max: %s', adjusted_min, adjusted_max)

bin_count = len(channel.depth_bin)
logger.info('bin_count: %s', bin_count)

adjusted_depth_per_bin = (adjusted_max - adjusted_min) / bin_count
logger.info('adjusted_depth_per_bin: %s', adjusted_depth_per_bin)

adjusted_bin_depths = [adjusted_min + (j * adjusted_depth_per_bin) for j in range(0, bin_count)]
logger.info('adjusted_bin_depths[0]: %s ... [-1]: %s', adjusted_bin_depths[0], adjusted_bin_depths[-1])

def InterpSingle(unadjusted_depth_amplitudes, unadjusted_min, unadjusted_max, time):

    if (time % 1000) == 0:
        total = len(channel.time)
        perc = 100.0 * time / total
        logger.info('%s : %s of %s', perc, time, total)

    unadjusted_depth_per_bin = (unadjusted_max - unadjusted_min) / bin_count

    min_index = (adjusted_min - unadjusted_min) / unadjusted_depth_per_bin
    max_index = ((adjusted_min + ((bin_count - 1) * adjusted_depth_per_bin)) - unadjusted_min) / unadjusted_depth_per_bin
    index_mapping = numpy.linspace(min_index, max_index, bin_count)
    adjusted_depth_amplitudes = numpy.interp(index_mapping, range(0, len(unadjusted_depth_amplitudes)), unadjusted_depth_amplitudes, left=0, right=0)
    return adjusted_depth_amplitudes

def Interp(*args, **kwargs):
    logger.info('args: %s, kwargs: %s', args, kwargs)

    data = args[0]
    upper_limit = args[1]
    lower_limit = args[2]
    time = args[3]
    #logger.info('data: %s len(data[0]): %s data.shape: %s', data, len(data[0]), data.shape)

    adjusted = []
    for i in range(0, len(upper_limit)):
        d = data[i]
        u = upper_limit[i]
        l = lower_limit[i]
        t = time[i]
        result = InterpSingle(d, u, l, t)
        adjusted.append(result)

    #logger.info('adjusted: %s', adjusted)
    return adjusted



channel = channel.chunk({'time':1000}) # Comment this line out to disable dask
#logger.info('Channel: %s', channel)
#logger.info('shape of data: %s len(data[0]): %s', channel.data.shape, len(channel.data[0]))
m2_lazy = xarray.apply_ufunc(
    Interp, 
    channel.data, 
    channel.upper_limit, 
    channel.lower_limit, 
    channel.time, 
    input_core_dims=[['depth_bin'], [], [], []], 
    output_core_dims=[['depth']],
    dask='parallelized',   # Comment this line out to disable dask
    output_dtypes=[numpy.dtype(numpy.int32)],   # Comment this line out to disable dask
    output_sizes={'depth':len(adjusted_bin_depths)},   # Comment this line out to disable dask
    )
m2 = m2_lazy.compute() # Comment this line out to disable dask
m2 = m2.assign_coords({'depth':adjusted_bin_depths})

logger.info('Closing dask client')
client.close()

logger.info('Exit process')

```
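One likely explanation for the empty-array calls above (an inference, not something confirmed in this thread): with dask='parallelized', dask first probes the wrapped function with zero-sized dummy inputs to infer the output's dtype and shape before any real chunk is processed. A pure-Python sketch of guarding against such probe calls (interp_guarded is a hypothetical name, not part of the code above):

```python
def interp_guarded(data, upper_limit, lower_limit, time):
    # Metadata-inference probes pass size-0 inputs; short-circuit with
    # an empty result instead of failing on them.
    if len(data) == 0:
        return data
    # Placeholder for the real per-timestep interpolation work:
    return [list(row) for row in data]


# Probe-style call (empty inputs) vs. a real chunk:
print(interp_guarded([], [], [], []))
print(interp_guarded([[1, 2], [3, 4]], [0, 0], [30, 30], [0, 1]))
```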

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray groupby/map fails to parallelize 561921094
1078439763 https://github.com/pydata/xarray/issues/2233#issuecomment-1078439763 https://api.github.com/repos/pydata/xarray/issues/2233 IC_kwDOAMm_X85AR69T rsignell-usgs 1872600 2022-03-24T22:26:07Z 2023-07-16T15:13:39Z NONE

https://github.com/pydata/xarray/issues/2233#issuecomment-397602084

Would the new xarray index/coordinate internal refactoring now allow us to address this issue?

cc @kthyng

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Problem opening unstructured grid ocean forecasts with 4D vertical coordinates 332471780
1578777785 https://github.com/pydata/xarray/pull/7862#issuecomment-1578777785 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85eGjy5 headtr1ck 43316012 2023-06-06T13:31:34Z 2023-06-06T13:31:34Z COLLABORATOR

If you want you can leave a comment, but the mypy CI should fail on unused ignores, so we will notice :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1578775636 https://github.com/pydata/xarray/pull/7862#issuecomment-1578775636 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85eGjRU kmuehlbauer 5821660 2023-06-06T13:30:15Z 2023-06-06T13:30:15Z MEMBER

Might be worth an issue over at numpy with the example from the test.

numpy/numpy#23886

The issue is already resolved over at numpy which is really great! It was also marked as backport. @headtr1ck How are these issues resolved currently or how do we track removing the ignore?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1578248748 https://github.com/pydata/xarray/pull/7862#issuecomment-1578248748 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85eEios kmuehlbauer 5821660 2023-06-06T09:04:39Z 2023-06-06T09:04:39Z MEMBER

Might be worth an issue over at numpy with the example from the test.

https://github.com/numpy/numpy/issues/23886

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1577838062 https://github.com/pydata/xarray/pull/7888#issuecomment-1577838062 https://api.github.com/repos/pydata/xarray/issues/7888 IC_kwDOAMm_X85eC-Xu dcherian 2448579 2023-06-06T03:20:39Z 2023-06-06T03:20:39Z MEMBER

Should we delete the cfgrib example instead?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add cfgrib,ipywidgets to doc env 1736542260
1577827466 https://github.com/pydata/xarray/issues/7841#issuecomment-1577827466 https://api.github.com/repos/pydata/xarray/issues/7841 IC_kwDOAMm_X85eC7yK dcherian 2448579 2023-06-06T03:05:47Z 2023-06-06T03:05:47Z MEMBER

https://github.com/corteva/rioxarray/issues/676

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray docs showing tracebacks instead of plots 1709215291
1577528999 https://github.com/pydata/xarray/issues/7894#issuecomment-1577528999 https://api.github.com/repos/pydata/xarray/issues/7894 IC_kwDOAMm_X85eBy6n chfite 59711987 2023-06-05T21:59:45Z 2023-06-05T21:59:45Z NONE

```python
# input array
array = xr.DataArray(
    [1, 3, 6, np.nan, 19, 20, 13],
    dims=['time'],
    coords=[pd.date_range('2023-06-05 00:00', '2023-06-05 06:00', freq='H')],
)
# array([ 1.,  3.,  6., nan, 19., 20., 13.])
# Coordinates: time (time) datetime64[ns] 2023-06-05 ... 2023-06-05T06:00

# however the integrated value ends up as a NaN
array.integrate('time')
# array(nan)

# if one still wanted to know the integrated values for where there were
# values, it would essentially be like integrating the separate chunks
# where the valid values existed

# first chunk
array.isel(time=slice(0, 3)).integrate('time')
# array(2.34e+13)

# second chunk
array.isel(time=slice(4, 7)).integrate('time')
# array(1.296e+14)

# and then the sum would be the fully integrated area
```

@dcherian I essentially was wondering whether it was possible for a skipna argument or some kind of NaN handling to be implemented that would allow users to avoid integrating in chunks due to the presence of NaNs. I do not work in dev so I would not know how to implement this, but I thought I'd see if others had thoughts.
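The chunk-and-sum workaround described above can be sketched in plain Python (integrate_skipna is a hypothetical name, not xarray API): trapezoidal integration that drops any interval touching a NaN, so the result equals the sum of the valid chunks.

```python
import math


def integrate_skipna(values, times):
    """Trapezoidal integral that skips intervals with a NaN endpoint.

    Illustrative sketch of the requested skipna behaviour: each run of
    valid values is integrated separately (like the manual chunks above)
    and the pieces are summed.
    """
    total = 0.0
    for i in range(len(values) - 1):
        a, b = values[i], values[i + 1]
        if math.isnan(a) or math.isnan(b):
            continue  # drop any interval touching a NaN
        total += 0.5 * (a + b) * (times[i + 1] - times[i])
    return total


vals = [1.0, 3.0, 6.0, math.nan, 19.0, 20.0, 13.0]
hours = list(range(7))  # hourly steps, as in the example above
print(integrate_skipna(vals, hours))  # 6.5 (first chunk) + 36.0 (second) = 42.5
```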

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Can a "skipna" argument be added for Dataset.integrate() and DataArray.integrate()? 1742035781
1577474914 https://github.com/pydata/xarray/issues/7894#issuecomment-1577474914 https://api.github.com/repos/pydata/xarray/issues/7894 IC_kwDOAMm_X85eBlti dcherian 2448579 2023-06-05T21:05:47Z 2023-06-05T21:05:57Z MEMBER

but is it not possible for it to calculate the integrated values where there were regular values?

@chfite Can you provide an example of what you would want it to do, please?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Can a "skipna" argument be added for Dataset.integrate() and DataArray.integrate()? 1742035781
1577364529 https://github.com/pydata/xarray/pull/7891#issuecomment-1577364529 https://api.github.com/repos/pydata/xarray/issues/7891 IC_kwDOAMm_X85eBKwx headtr1ck 43316012 2023-06-05T19:35:33Z 2023-06-05T19:35:33Z COLLABORATOR

You could use DataArray.round to round to significant decimals.

Better to use a tolerance in the assertion testing.
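For instance, the stdlib's math.isclose (or numpy.testing.assert_allclose in a test suite) expresses the tolerance directly, with no rounding step; the values here are illustrative.

```python
import math

fitted = 2.0000004   # e.g. a curvefit result carrying float noise
expected = 2.0

# assert with a relative tolerance instead of rounding both sides
assert math.isclose(fitted, expected, rel_tol=1e-5)
assert not math.isclose(fitted, 2.1, rel_tol=1e-5)
print("tolerance assertions passed")
```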

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add errors option to curvefit 1740268634
1576080083 https://github.com/pydata/xarray/issues/7866#issuecomment-1576080083 https://api.github.com/repos/pydata/xarray/issues/7866 IC_kwDOAMm_X85d8RLT kmuehlbauer 5821660 2023-06-05T05:45:30Z 2023-06-05T05:45:30Z MEMBER

@vrishk Sorry for the delay here and thanks for bringing this to attention. We now have at least two requests which might move this forward (moving ensure_dtype_not_object into the backends), but this would need some discussion first on how to do it.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Enable object_codec in zarr backend 1720924071
1576074048 https://github.com/pydata/xarray/issues/7892#issuecomment-1576074048 https://api.github.com/repos/pydata/xarray/issues/7892 IC_kwDOAMm_X85d8PtA kmuehlbauer 5821660 2023-06-05T05:37:32Z 2023-06-05T05:37:32Z MEMBER

@mktippett Thanks for raising this. The issue should be cleared after #7888 is merged.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  GRIB Data Example is broken 1740685974
1575756825 https://github.com/pydata/xarray/pull/7891#issuecomment-1575756825 https://api.github.com/repos/pydata/xarray/issues/7891 IC_kwDOAMm_X85d7CQZ Illviljan 14371165 2023-06-04T22:29:18Z 2023-06-04T22:29:18Z MEMBER

You could use DataArray.round to round to significant decimals.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add errors option to curvefit 1740268634
1575677244 https://github.com/pydata/xarray/pull/7891#issuecomment-1575677244 https://api.github.com/repos/pydata/xarray/issues/7891 IC_kwDOAMm_X85d6u08 mgunyho 20118130 2023-06-04T19:08:20Z 2023-06-04T19:11:44Z CONTRIBUTOR

Oh no, the doctest failure is because the test is flaky, this was introduced by me in #7821, see here: https://github.com/pydata/xarray/pull/7821#issuecomment-1537142237 and here: https://github.com/pydata/xarray/pull/7821/commits/a0e6659ca01188378f29a35b418d6f9e2b889d2e. I'll submit another patch to fix it soon, although I'm not sure how. If you have any tips to avoid this problem, let me know.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add errors option to curvefit 1740268634
1575671448 https://github.com/pydata/xarray/pull/7889#issuecomment-1575671448 https://api.github.com/repos/pydata/xarray/issues/7889 IC_kwDOAMm_X85d6taY andersy005 13301940 2023-06-04T18:46:09Z 2023-06-04T18:46:09Z MEMBER

Thank you @keewis

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  retire the TestPyPI workflow 1738586208
1575492166 https://github.com/pydata/xarray/pull/6515#issuecomment-1575492166 https://api.github.com/repos/pydata/xarray/issues/6515 IC_kwDOAMm_X85d6BpG mgunyho 20118130 2023-06-04T09:43:50Z 2023-06-04T09:43:50Z CONTRIBUTOR

Hi! I would also like to see this implemented, so I rebased this branch and added a test in #7891.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add allow_failures flag to Dataset.curve_fit 1215946244
1574365471 https://github.com/pydata/xarray/issues/7890#issuecomment-1574365471 https://api.github.com/repos/pydata/xarray/issues/7890 IC_kwDOAMm_X85d1ukf dcherian 2448579 2023-06-02T22:04:33Z 2023-06-02T22:04:33Z MEMBER

I think the only other one is dask, which should also work.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `xarray.rolling_window` Converts `dims` Argument from Tuple to List Causing Issues for Cupy-Xarray 1738835134
1574338418 https://github.com/pydata/xarray/issues/7890#issuecomment-1574338418 https://api.github.com/repos/pydata/xarray/issues/7890 IC_kwDOAMm_X85d1n9y negin513 17344536 2023-06-02T21:30:05Z 2023-06-02T21:30:05Z CONTRIBUTOR

@dcherian : agreed! But I am afraid it might break other components. Although numpy seems to be able to handle both tuple and list in normalize_axis_tuple, and I cannot see any other issues arising from this: https://github.com/numpy/numpy/blob/f67467c21a1797becde3097661996f60df4080ff/numpy/core/numeric.py#L1328

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `xarray.rolling_window` Converts `dims` Argument from Tuple to List Causing Issues for Cupy-Xarray 1738835134
1574331034 https://github.com/pydata/xarray/issues/7890#issuecomment-1574331034 https://api.github.com/repos/pydata/xarray/issues/7890 IC_kwDOAMm_X85d1mKa dcherian 2448579 2023-06-02T21:23:25Z 2023-06-02T21:27:06Z MEMBER

This seems like a real easy fix? axis = tuple(self.get_axis_num(d) for d in dim)

EDIT: the Array API seems to type axis as Optional[Union[int, Tuple[int, ...]]] pretty consistently, so it seems like we should always pass tuples down to the array library
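The one-line fix quoted above amounts to normalizing the axis argument to a tuple of ints; a standalone sketch (normalize_axes and the explicit dims list are illustrative, not xarray internals):

```python
def normalize_axes(dim, dims_order):
    # Always return a Tuple[int, ...], mirroring the suggested fix:
    # the Array API types axis as Optional[Union[int, Tuple[int, ...]]],
    # so lists should never be passed down to the array library.
    if isinstance(dim, str):
        dim = (dim,)
    return tuple(dims_order.index(d) for d in dim)


print(normalize_axes(("time", "x"), ["time", "y", "x"]))  # (0, 2)
print(normalize_axes("y", ["time", "y", "x"]))            # (1,)
```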

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `xarray.rolling_window` Converts `dims` Argument from Tuple to List Causing Issues for Cupy-Xarray 1738835134
1574324606 https://github.com/pydata/xarray/issues/7890#issuecomment-1574324606 https://api.github.com/repos/pydata/xarray/issues/7890 IC_kwDOAMm_X85d1kl- welcome[bot] 30606887 2023-06-02T21:16:03Z 2023-06-02T21:16:03Z NONE

Thanks for opening your first issue here at xarray! Be sure to follow the issue template! If you have an idea for a solution, we would really welcome a Pull Request with proposed changes. See the Contributing Guide for more. It may take us a while to respond here, but we really value your contribution. Contributors like you help make xarray better. Thank you!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `xarray.rolling_window` Converts `dims` Argument from Tuple to List Causing Issues for Cupy-Xarray 1738835134
1574278204 https://github.com/pydata/xarray/pull/7862#issuecomment-1574278204 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85d1ZQ8 headtr1ck 43316012 2023-06-02T20:27:46Z 2023-06-02T20:28:29Z COLLABORATOR

This seems to be a numpy issue, mypy thinks that you cannot call np.dtype like you do.

Might be worth an issue over at numpy with the example from the test.

For now we can simply ignore this error.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1574264842 https://github.com/pydata/xarray/pull/7862#issuecomment-1574264842 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85d1WAK dcherian 2448579 2023-06-02T20:14:33Z 2023-06-02T20:14:48Z MEMBER

xarray/tests/test_coding_strings.py:36: error: No overload variant of "dtype" matches argument types "str", "Dict[str, Type[str]]" [call-overload]

cc @Illviljan @headtr1ck

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1573764660 https://github.com/pydata/xarray/pull/7862#issuecomment-1573764660 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85dzb40 tomwhite 85085 2023-06-02T13:44:43Z 2023-06-02T13:44:43Z CONTRIBUTOR

@kmuehlbauer thanks for adding tests! I'm not sure what the mypy error is either, I'm afraid...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1572412059 https://github.com/pydata/xarray/pull/7880#issuecomment-1572412059 https://api.github.com/repos/pydata/xarray/issues/7880 IC_kwDOAMm_X85duRqb shoyer 1217238 2023-06-01T16:51:07Z 2023-06-01T17:10:49Z MEMBER

Given that this error only is caused when Python is shutting down, which is exactly a case in which we do not need to clean up open file objects, maybe we can remove the __del__ instead?

Something like:

```python
import atexit

@atexit.register
def _remove_del_method():
    # We don't need to close unclosed files at program exit,
    # and may not be able to do so, because Python is cleaning up
    # imports.
    del CachingFileManager.__del__
```

(I have not tested this!)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  don't use `CacheFileManager.__del__` on interpreter shutdown 1730664352
1572437423 https://github.com/pydata/xarray/pull/7880#issuecomment-1572437423 https://api.github.com/repos/pydata/xarray/issues/7880 IC_kwDOAMm_X85duX2v keewis 14808389 2023-06-01T17:01:56Z 2023-06-01T17:06:06Z MEMBER

that appears to work on both my laptop and my local HPC, and is arguably a lot easier to implement / understand as we don't need to make sure all the globals we use are still available (which in this case would be acquire, OPTIONS, warnings, and RuntimeWarning).

Edit: let me change the PR to do that instead

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  don't use `CacheFileManager.__del__` on interpreter shutdown 1730664352
1572384036 https://github.com/pydata/xarray/pull/7880#issuecomment-1572384036 https://api.github.com/repos/pydata/xarray/issues/7880 IC_kwDOAMm_X85duK0k keewis 14808389 2023-06-01T16:38:23Z 2023-06-01T16:54:08Z MEMBER

Have you verified that this fixes things at least on your machine?

I thought I did, but apparently something changed: right now it fails because OPTIONS is not available anymore, so we might have to add a reference to that, as well. Additionally, for the warning we use warnings.warn and RuntimeWarning that might also have disappeared already (but those are standard library / builtins, so hopefully not?)

In any case, you can verify this, too:

  • create a new environment using mamba create -n test python=3.11 numpy pandas packaging pooch netcdf4 and activate it
  • run pip install 'dask[array]' to install dask without distributed (this appears to make a difference for me, not sure if that's the same elsewhere)
  • editable-install xarray so we can easily switch between branches
  • run python -c 'import xarray as xr; ds = xr.tutorial.open_dataset("air_temperature", chunks={})'

This should print an error for main and shouldn't with this branch (confirmed on my local HPC).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  don't use `CacheFileManager.__del__` on interpreter shutdown 1730664352
1572363440 https://github.com/pydata/xarray/pull/7880#issuecomment-1572363440 https://api.github.com/repos/pydata/xarray/issues/7880 IC_kwDOAMm_X85duFyw keewis 14808389 2023-06-01T16:26:42Z 2023-06-01T16:26:42Z MEMBER

the issue is that this doesn't occur on normal garbage collection but only on interpreter shutdown. So really, I don't think we have any way to test this using pytest as that itself is written in python (unless of course we can make use of sub-interpreters, but that might be more trouble than it's worth).
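One workable pattern (an assumption about how such behaviour could be tested, not something proposed in the thread) is to spawn a fresh interpreter with subprocess so a real shutdown happens, then assert on its output; Manager below stands in for CachingFileManager.

```python
import subprocess
import sys

# Scenario run in a separate interpreter: register an atexit hook that
# removes __del__ (the approach discussed above) so it never fires
# during interpreter teardown.
scenario = """
import atexit

class Manager:
    def __del__(self):
        print("__del__ ran at shutdown")  # the message we do NOT want

@atexit.register
def _drop_del():
    # atexit handlers run before module teardown, so __del__ is gone
    # by the time the instance is collected at shutdown
    del Manager.__del__

m = Manager()
"""

result = subprocess.run(
    [sys.executable, "-c", scenario],
    capture_output=True, text=True,
)
print(repr(result.stdout))  # empty: __del__ was removed before teardown
```

The outer test process never shuts down, so pytest can inspect the child's stdout/stderr and return code normally.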

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  don't use `CacheFileManager.__del__` on interpreter shutdown 1730664352
1572359754 https://github.com/pydata/xarray/pull/7880#issuecomment-1572359754 https://api.github.com/repos/pydata/xarray/issues/7880 IC_kwDOAMm_X85duE5K headtr1ck 43316012 2023-06-01T16:23:45Z 2023-06-01T16:23:45Z COLLABORATOR

Maybe you can add a test that creates a cachingFilemanager object, then deletes it, then run gc.collect() and check somehow if it works? But no idea how pytest interferes with this or how you can ensure that there are no references to the module anymore?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  don't use `CacheFileManager.__del__` on interpreter shutdown 1730664352
1572357330 https://github.com/pydata/xarray/pull/7877#issuecomment-1572357330 https://api.github.com/repos/pydata/xarray/issues/7877 IC_kwDOAMm_X85duETS dependabot[bot] 49699333 2023-06-01T16:21:59Z 2023-06-01T16:21:59Z CONTRIBUTOR

OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting @dependabot ignore this major version or @dependabot ignore this minor version. You can also ignore all major, minor, or patch releases for a dependency by adding an ignore condition with the desired update_types to your config file.

If you change your mind, just re-open this PR and I'll resolve any conflicts on it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Bump mamba-org/provision-with-micromamba from 15 to 16 1730190019
1572350143 https://github.com/pydata/xarray/pull/7880#issuecomment-1572350143 https://api.github.com/repos/pydata/xarray/issues/7880 IC_kwDOAMm_X85duCi_ shoyer 1217238 2023-06-01T16:16:40Z 2023-06-01T16:16:40Z MEMBER

I agree that this seems very hard to test!

Have you verified that this fixes things at least on your machine?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  don't use `CacheFileManager.__del__` on interpreter shutdown 1730664352
1572306481 https://github.com/pydata/xarray/pull/7883#issuecomment-1572306481 https://api.github.com/repos/pydata/xarray/issues/7883 IC_kwDOAMm_X85dt34x dcherian 2448579 2023-06-01T15:49:42Z 2023-06-01T15:49:42Z MEMBER

Hmmm, `ndim` is in the array API, so potentially we could just update the test.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Avoid one call to len when getting ndim of Variables 1731320789
1460873349 https://github.com/pydata/xarray/issues/7456#issuecomment-1460873349 https://api.github.com/repos/pydata/xarray/issues/7456 IC_kwDOAMm_X85XEyiF Karimat22 127195910 2023-03-08T21:04:05Z 2023-06-01T15:42:44Z NONE

The `xr.Dataset.expand_dims()` method can be used to add new dimensions to a dataset. The `axis` parameter specifies where to insert the new dimension, and it takes integer positions. However, it's worth noting that it only works when expanding along a 1D coordinate, not when expanding along a multi-dimensional array.

Here's an example illustrating how to use the `axis` parameter to expand a dataset along a 1D coordinate:

```python
import xarray as xr

# create a sample dataset
data = xr.DataArray([[1, 2], [3, 4]], dims=('x', 'y'))
ds = xr.Dataset({'foo': data})

# add a new dimension at the position of the 'x' dimension
ds_expanded = ds.expand_dims({'z': [1]}, axis=0)
```

In this example, we create a 2D array with dimensions x and y, and then add a new dimension at the position of the x dimension using `axis=0` (the `axis` parameter takes integer positions, not dimension names).

However, if you try to use the axis parameter to expand a dataset along a multi-dimensional array, you may encounter an error. This is because expanding along a multi-dimensional array would result in a dataset with non-unique dimension names, which is not allowed in xarray.

Here's an example illustrating this issue:

```python
import xarray as xr

# create a sample dataset with a 2D array
data = xr.DataArray([[1, 2], [3, 4]], dims=('x', 'y'))
ds = xr.Dataset({'foo': data})

# try to insert the single new dimension at two axis positions at once
ds_expanded = ds.expand_dims({'z': [1]}, axis=(0, 1))
```

In this example, we try to use `axis=(0, 1)` to add the new dimension at both the x and y positions. However, this raises a ValueError, because each new dimension must be given exactly one insertion position.

To add a new dimension of length one without the `axis` parameter, you can instead use the `xr.concat()` function to concatenate along the desired new dimension:

```python
import xarray as xr

# create a sample dataset with a 2D array
data = xr.DataArray([[1, 2], [3, 4]], dims=('x', 'y'))
ds = xr.Dataset({'foo': data})

# add a new 'z' dimension of length one by concatenating along it
ds_expanded = xr.concat([ds], dim='z')
```

In this example, we use the `xr.concat()` function to concatenate the dataset along a new dimension. The `dim='z'` parameter specifies that the new dimension should be named z.
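A runnable sketch of the 1D case for reference (assuming only that xarray and numpy are installed; `expand_dims` documents `axis` as taking integer positions):

```python
import numpy as np
import xarray as xr

data = xr.DataArray(np.array([[1, 2], [3, 4]]), dims=("x", "y"))
ds = xr.Dataset({"foo": data})

# insert the new 'z' dimension at position 0 of each variable
ds0 = ds.expand_dims({"z": [1]}, axis=0)
```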

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.DataSet.expand_dims axis option doesn't work 1548355645
1572276996 https://github.com/pydata/xarray/issues/7884#issuecomment-1572276996 https://api.github.com/repos/pydata/xarray/issues/7884 IC_kwDOAMm_X85dtwsE dcherian 2448579 2023-06-01T15:30:26Z 2023-06-01T15:30:26Z MEMBER

Please ask over at the cfgrib repo. But it does look like a bad environment / bad install.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reading .grib files with xarray 1732510720
1572259965 https://github.com/pydata/xarray/pull/7670#issuecomment-1572259965 https://api.github.com/repos/pydata/xarray/issues/7670 IC_kwDOAMm_X85dtsh9 headtr1ck 43316012 2023-06-01T15:22:33Z 2023-06-01T15:22:33Z COLLABORATOR

The cfgrib notebook in the documentation is broken. I guess it's related to this PR. See: https://docs.xarray.dev/en/stable/examples/ERA5-GRIB-example.html

Same problem with https://github.com/pydata/xarray/issues/7841

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Delete built-in cfgrib backend 1639732867
1572174061 https://github.com/pydata/xarray/pull/7670#issuecomment-1572174061 https://api.github.com/repos/pydata/xarray/issues/7670 IC_kwDOAMm_X85dtXjt malmans2 22245117 2023-06-01T14:34:44Z 2023-06-01T14:34:44Z CONTRIBUTOR

The cfgrib notebook in the documentation is broken. I guess it's related to this PR. See: https://docs.xarray.dev/en/stable/examples/ERA5-GRIB-example.html

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Delete built-in cfgrib backend 1639732867
1572021301 https://github.com/pydata/xarray/pull/7862#issuecomment-1572021301 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85dsyQ1 kmuehlbauer 5821660 2023-06-01T13:06:32Z 2023-06-01T13:06:32Z MEMBER

@tomwhite I've added tests to check the backend code for vlen string dtype metadata. I also had to add a specific check for the h5py vlen string metadata. I think we've covered everything for the proposed change to allow empty vlen string dtype metadata.

I'm looking at the mypy error and do not have the slightest clue what and where to change. Any help appreciated.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1571698855 https://github.com/pydata/xarray/issues/7887#issuecomment-1571698855 https://api.github.com/repos/pydata/xarray/issues/7887 IC_kwDOAMm_X85drjin keewis 14808389 2023-06-01T09:39:09Z 2023-06-01T09:39:09Z MEMBER

this is #7879 (and thus probably #7079). I suspect our locks are not working properly, but in any case we really should try to fix this.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ⚠️ Nightly upstream-dev CI failed ⚠️ 1735219849
1571684058 https://github.com/pydata/xarray/issues/7884#issuecomment-1571684058 https://api.github.com/repos/pydata/xarray/issues/7884 IC_kwDOAMm_X85drf7a leamhowe 48015835 2023-06-01T09:29:13Z 2023-06-01T09:29:13Z NONE

Just running import cfgrib returns the error below. But this works for you?

Is this an issue that cannot be solved?

Thanks again for all your help!

```

ModuleNotFoundError Traceback (most recent call last) Cell In[32], line 1 ----> 1 import cfgrib

File ~\Anaconda3\envs\doom_test\lib\site-packages\cfgrib__init__.py:20 18 # cfgrib core API depends on the ECMWF ecCodes C-library only 19 from .abc import Field, Fieldset, Index, MappingFieldset ---> 20 from .cfmessage import COMPUTED_KEYS 21 from .dataset import ( 22 Dataset, 23 DatasetBuildError, (...) 27 open_from_index, 28 ) 29 from .messages import FieldsetIndex, FileStream, Message

File ~\Anaconda3\envs\doom_test\lib\site-packages\cfgrib\cfmessage.py:29 26 import attr 27 import numpy as np ---> 29 from . import abc, messages 31 LOG = logging.getLogger(name) 33 # taken from eccodes stepUnits.table

File ~\Anaconda3\envs\doom_test\lib\site-packages\cfgrib\messages.py:28 25 import typing as T 27 import attr ---> 28 import eccodes # type: ignore 29 import numpy as np 31 from . import abc

File ~\Anaconda3\envs\doom_test\lib\site-packages\eccodes__init__.py:13 1 # 2 # (C) Copyright 2017- ECMWF. 3 # (...) 10 # 11 # ---> 13 from .eccodes import * # noqa 14 from .highlevel import *

File ~\Anaconda3\envs\doom_test\lib\site-packages\eccodes\eccodes.py:12 1 # 2 # (C) Copyright 2017- ECMWF. 3 # (...) 10 # 11 # ---> 12 from gribapi import ( 13 CODES_PRODUCT_ANY, 14 CODES_PRODUCT_BUFR, 15 CODES_PRODUCT_GRIB, 16 CODES_PRODUCT_GTS, 17 CODES_PRODUCT_METAR, 18 ) 19 from gribapi import GRIB_CHECK as CODES_CHECK 20 from gribapi import GRIB_MISSING_DOUBLE as CODES_MISSING_DOUBLE

File ~\Anaconda3\envs\doom_test\lib\site-packages\gribapi__init__.py:13 1 # 2 # (C) Copyright 2017- ECMWF. 3 # (...) 10 # 11 # ---> 13 from .gribapi import * # noqa 14 from .gribapi import version, lib 16 # The minimum recommended version for the ecCodes package

File ~\Anaconda3\envs\doom_test\lib\site-packages\gribapi\gribapi.py:34 30 from functools import wraps 32 import numpy as np ---> 34 from gribapi.errors import GribInternalError 36 from . import errors 37 from .bindings import ENC

File ~\Anaconda3\envs\doom_test\lib\site-packages\gribapi\errors.py:16 1 # 2 # (C) Copyright 2017- ECMWF. 3 # (...) 9 # does it submit to any jurisdiction. 10 # 12 """ 13 Exception class hierarchy 14 """ ---> 16 from .bindings import ENC, ffi, lib 19 class GribInternalError(Exception): 20 """ 21 @brief Wrap errors coming from the C API in a Python exception object. 22 23 Base class for all exceptions 24 """

File ~\Anaconda3\envs\doom_test\lib\site-packages\gribapi\bindings.py:40 37 # default encoding for ecCodes strings 38 ENC = "ascii" ---> 40 ffi = cffi.FFI() 41 CDEF = pkgutil.get_data(name, "grib_api.h") 42 CDEF += pkgutil.get_data(name, "eccodes.h")

File ~\Anaconda3\envs\doom_test\lib\site-packages\cffi\api.py:48, in FFI.init(self, backend) 42 """Create an FFI instance. The 'backend' argument is used to 43 select a non-default backend, mostly for tests. 44 """ 45 if backend is None: 46 # You need PyPy (>= 2.0 beta), or a CPython (>= 2.6) with 47 # _cffi_backend.so compiled. ---> 48 import _cffi_backend as backend 49 from . import version 50 if backend.version != version: 51 # bad version! Try to be as explicit as possible.

ModuleNotFoundError: No module named '_cffi_backend' ```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reading .grib files with xarray 1732510720
1570587416 https://github.com/pydata/xarray/issues/7884#issuecomment-1570587416 https://api.github.com/repos/pydata/xarray/issues/7884 IC_kwDOAMm_X85dnUMY keewis 14808389 2023-05-31T16:56:16Z 2023-05-31T16:58:23Z MEMBER

No module named '_cffi_backend'

Does simply import cfgrib work for you? I suspect it doesn't, which would explain the issue. It's unfortunate that the error is rewritten to "unknown engine", but I'm not sure how we would detect that it's a dependency that fails.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reading .grib files with xarray 1732510720
1570518243 https://github.com/pydata/xarray/issues/7884#issuecomment-1570518243 https://api.github.com/repos/pydata/xarray/issues/7884 IC_kwDOAMm_X85dnDTj leamhowe 48015835 2023-05-31T16:06:55Z 2023-05-31T16:06:55Z NONE

The version of cfgrib i have is: cfgrib=0.9.10.4

I tried to update my own environment as you advised and have this error:

```

ModuleNotFoundError Traceback (most recent call last) File ~\Anaconda3\envs\doom_test\lib\site-packages\xarray\tutorial.py:151, in open_dataset(name, cache, cache_dir, engine, **kws) 150 try: --> 151 import cfgrib # noqa 152 except ImportError as e:

File ~\Anaconda3\envs\doom_test\lib\site-packages\cfgrib__init__.py:20 19 from .abc import Field, Fieldset, Index, MappingFieldset ---> 20 from .cfmessage import COMPUTED_KEYS 21 from .dataset import ( 22 Dataset, 23 DatasetBuildError, (...) 27 open_from_index, 28 )

File ~\Anaconda3\envs\doom_test\lib\site-packages\cfgrib\cfmessage.py:29 27 import numpy as np ---> 29 from . import abc, messages 31 LOG = logging.getLogger(name)

File ~\Anaconda3\envs\doom_test\lib\site-packages\cfgrib\messages.py:28 27 import attr ---> 28 import eccodes # type: ignore 29 import numpy as np

File ~\Anaconda3\envs\doom_test\lib\site-packages\eccodes__init__.py:13 1 # 2 # (C) Copyright 2017- ECMWF. 3 # (...) 10 # 11 # ---> 13 from .eccodes import * # noqa 14 from .highlevel import *

File ~\Anaconda3\envs\doom_test\lib\site-packages\eccodes\eccodes.py:12 1 # 2 # (C) Copyright 2017- ECMWF. 3 # (...) 10 # 11 # ---> 12 from gribapi import ( 13 CODES_PRODUCT_ANY, 14 CODES_PRODUCT_BUFR, 15 CODES_PRODUCT_GRIB, 16 CODES_PRODUCT_GTS, 17 CODES_PRODUCT_METAR, 18 ) 19 from gribapi import GRIB_CHECK as CODES_CHECK

File ~\Anaconda3\envs\doom_test\lib\site-packages\gribapi__init__.py:13 1 # 2 # (C) Copyright 2017- ECMWF. 3 # (...) 10 # 11 # ---> 13 from .gribapi import * # noqa 14 from .gribapi import version, lib

File ~\Anaconda3\envs\doom_test\lib\site-packages\gribapi\gribapi.py:34 32 import numpy as np ---> 34 from gribapi.errors import GribInternalError 36 from . import errors

File ~\Anaconda3\envs\doom_test\lib\site-packages\gribapi\errors.py:16 12 """ 13 Exception class hierarchy 14 """ ---> 16 from .bindings import ENC, ffi, lib 19 class GribInternalError(Exception):

File ~\Anaconda3\envs\doom_test\lib\site-packages\gribapi\bindings.py:40 38 ENC = "ascii" ---> 40 ffi = cffi.FFI() 41 CDEF = pkgutil.get_data(name, "grib_api.h")

File ~\Anaconda3\envs\doom_test\lib\site-packages\cffi\api.py:48, in FFI.init(self, backend) 45 if backend is None: 46 # You need PyPy (>= 2.0 beta), or a CPython (>= 2.6) with 47 # _cffi_backend.so compiled. ---> 48 import _cffi_backend as backend 49 from . import version

ModuleNotFoundError: No module named '_cffi_backend'

The above exception was the direct cause of the following exception:

ImportError Traceback (most recent call last) Cell In[30], line 2 1 import xarray as xr ----> 2 xr.tutorial.open_dataset("era5-2mt-2019-03-uk.grib")

File ~\Anaconda3\envs\doom_test\lib\site-packages\xarray\tutorial.py:153, in open_dataset(name, cache, cache_dir, engine, **kws) 151 import cfgrib # noqa 152 except ImportError as e: --> 153 raise ImportError( 154 "Reading this tutorial dataset requires the cfgrib package." 155 ) from e 157 url = f"{base_url}/raw/{version}/{path.name}" 159 # retrieve the file

ImportError: Reading this tutorial dataset requires the cfgrib package. ```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reading .grib files with xarray 1732510720
1570164833 https://github.com/pydata/xarray/pull/7821#issuecomment-1570164833 https://api.github.com/repos/pydata/xarray/issues/7821 IC_kwDOAMm_X85dltBh Illviljan 14371165 2023-05-31T12:43:30Z 2023-05-31T12:43:30Z MEMBER

Thanks @mgunyho !

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement multidimensional initial guess and bounds for `curvefit` 1698626185
1568704895 https://github.com/pydata/xarray/pull/7876#issuecomment-1568704895 https://api.github.com/repos/pydata/xarray/issues/7876 IC_kwDOAMm_X85dgIl_ tomvothecoder 25624127 2023-05-30T16:09:17Z 2023-05-30T20:59:48Z CONTRIBUTOR

Thank you @keewis and @Illviljan! I made a comment about deprecating cdms2 in xarray in another issue/PR last year and didn't get around to it. I linked this PR in an xCDAT discussion post here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  deprecate the `cdms2` conversion methods 1729709527
1569021273 https://github.com/pydata/xarray/issues/7884#issuecomment-1569021273 https://api.github.com/repos/pydata/xarray/issues/7884 IC_kwDOAMm_X85dhV1Z keewis 14808389 2023-05-30T20:07:31Z 2023-05-30T20:08:04Z MEMBER

No, this should still work:

```sh
conda create -n test -c conda-forge xarray ipython python=3.11 cfgrib pooch
conda activate test
ipython
```

```python
import xarray as xr

xr.tutorial.open_dataset("era5-2mt-2019-03-uk.grib")
```

We somewhat recently dropped the built-in `cfgrib` engine in favor of the one provided by the `cfgrib` package (and that is also the reason why the example in the docs fails: `cfgrib` is not installed into the docs environment anymore, which is definitely an oversight).

Which version of `cfgrib` do you have? For reference, in the environment built with the command above I have `cfgrib=0.9.10.3`.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reading .grib files with xarray 1732510720
1568737270 https://github.com/pydata/xarray/issues/7884#issuecomment-1568737270 https://api.github.com/repos/pydata/xarray/issues/7884 IC_kwDOAMm_X85dgQf2 leamhowe 48015835 2023-05-30T16:32:00Z 2023-05-30T16:32:00Z NONE

Thanks for getting back to me!

cfgrib is installed. I believe it might be that grib files are no longer readable in the way shown in the example I am following: https://docs.xarray.dev/en/stable/examples/ERA5-GRIB-example.html

There are error messages on that example page as well.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reading .grib files with xarray 1732510720
1568728602 https://github.com/pydata/xarray/pull/7876#issuecomment-1568728602 https://api.github.com/repos/pydata/xarray/issues/7876 IC_kwDOAMm_X85dgOYa keewis 14808389 2023-05-30T16:25:25Z 2023-05-30T16:25:25Z MEMBER

great, thanks for the confirmation!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  deprecate the `cdms2` conversion methods 1729709527
1568726002 https://github.com/pydata/xarray/issues/7884#issuecomment-1568726002 https://api.github.com/repos/pydata/xarray/issues/7884 IC_kwDOAMm_X85dgNvy keewis 14808389 2023-05-30T16:23:30Z 2023-05-30T16:23:30Z MEMBER

as stated by the exception, the `cfgrib` engine is unknown, which usually means you're missing the `cfgrib` package (or this is an environment issue). If you did indeed install it, can you post the output of either `conda list` or `pip list` (in case you're not using conda)?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reading .grib files with xarray 1732510720
1568648350 https://github.com/pydata/xarray/issues/7884#issuecomment-1568648350 https://api.github.com/repos/pydata/xarray/issues/7884 IC_kwDOAMm_X85df6ye welcome[bot] 30606887 2023-05-30T15:32:08Z 2023-05-30T15:32:08Z NONE

Thanks for opening your first issue here at xarray! Be sure to follow the issue template! If you have an idea for a solution, we would really welcome a Pull Request with proposed changes. See the Contributing Guide for more. It may take us a while to respond here, but we really value your contribution. Contributors like you help make xarray better. Thank you!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reading .grib files with xarray 1732510720
1568557130 https://github.com/pydata/xarray/issues/7871#issuecomment-1568557130 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dfkhK mathause 10194086 2023-05-30T14:40:50Z 2023-05-30T14:40:50Z MEMBER

I am closing this. Feel free to re-open, or open a new issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1567450094 https://github.com/pydata/xarray/pull/7880#issuecomment-1567450094 https://api.github.com/repos/pydata/xarray/issues/7880 IC_kwDOAMm_X85dbWPu headtr1ck 43316012 2023-05-29T19:28:21Z 2023-05-29T19:28:21Z COLLABORATOR

I think this is intended (though certainly not very easy to get right): see the second part of the warning in the __del__ documentation.

You are right, that warning is exactly what is causing the issues.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  don't use `CacheFileManager.__del__` on interpreter shutdown 1730664352
1567446747 https://github.com/pydata/xarray/pull/7880#issuecomment-1567446747 https://api.github.com/repos/pydata/xarray/issues/7880 IC_kwDOAMm_X85dbVbb keewis 14808389 2023-05-29T19:22:12Z 2023-05-29T19:22:12Z MEMBER

I would have thought that the global (or module-level, here) variable/function `acquire` should have at least one reference until after the deletion of the object.

I think this is intended (though certainly not very easy to get right): see the second part of the warning in the __del__ documentation.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  don't use `CacheFileManager.__del__` on interpreter shutdown 1730664352
1567439206 https://github.com/pydata/xarray/pull/7880#issuecomment-1567439206 https://api.github.com/repos/pydata/xarray/issues/7880 IC_kwDOAMm_X85dbTlm headtr1ck 43316012 2023-05-29T19:09:41Z 2023-05-29T19:09:41Z COLLABORATOR

That's quite a weird bug. I would have thought that the global (or module-level, here) variable/function `acquire` should have at least one reference until after the deletion of the object. Is that a bug in Python's garbage collection?

Or does the garbage collection already start when calling `del`, without waiting for the `__del__` method to complete?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  don't use `CacheFileManager.__del__` on interpreter shutdown 1730664352
1567366415 https://github.com/pydata/xarray/issues/7879#issuecomment-1567366415 https://api.github.com/repos/pydata/xarray/issues/7879 IC_kwDOAMm_X85dbB0P keewis 14808389 2023-05-29T17:22:09Z 2023-05-29T17:22:09Z MEMBER

If I'm reading the different issues correctly, that means this is a duplicate of #7079

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  occasional segfaults on CI 1730451312
1567319929 https://github.com/pydata/xarray/issues/7879#issuecomment-1567319929 https://api.github.com/repos/pydata/xarray/issues/7879 IC_kwDOAMm_X85da2d5 huard 81219 2023-05-29T16:15:49Z 2023-05-29T16:15:49Z CONTRIBUTOR

There are similar segfaults in an xncml PR: https://github.com/xarray-contrib/xncml/pull/48

Googling around suggests it is related to netCDF not being thread-safe and recent python-netcdf4 releases releasing the GIL.
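The usual workaround for a non-thread-safe C library is to serialize all calls into it behind a single process-wide lock, which is roughly what xarray's backend locks aim to do; a minimal sketch (the names here are illustrative, not xarray's actual API):

```python
import threading

# one process-wide lock guarding every call into the C library
NETCDF_LOCK = threading.Lock()

def locked_call(fn, *args, **kwargs):
    """Run fn under the global lock so only one thread is in the library."""
    with NETCDF_LOCK:
        return fn(*args, **kwargs)
```

Any read that would otherwise enter the library concurrently, e.g. `locked_call(dataset_read, path)`, is then serialized.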

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  occasional segfaults on CI 1730451312
1567154608 https://github.com/pydata/xarray/issues/2697#issuecomment-1567154608 https://api.github.com/repos/pydata/xarray/issues/2697 IC_kwDOAMm_X85daOGw keewis 14808389 2023-05-29T13:41:37Z 2023-05-29T13:41:37Z MEMBER

closing, since anything still missing should be feature requests for xncml

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  read ncml files to create multifile datasets 401874795
1567147143 https://github.com/pydata/xarray/issues/893#issuecomment-1567147143 https://api.github.com/repos/pydata/xarray/issues/893 IC_kwDOAMm_X85daMSH keewis 14808389 2023-05-29T13:33:58Z 2023-05-29T13:34:49Z MEMBER

I think this has been fixed by xncml and/or kerchunk.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  'Warm start' for open_mfdataset? 163267018
1567109753 https://github.com/pydata/xarray/pull/7827#issuecomment-1567109753 https://api.github.com/repos/pydata/xarray/issues/7827 IC_kwDOAMm_X85daDJ5 spencerkclark 6628425 2023-05-29T13:00:30Z 2023-05-29T13:00:30Z MEMBER

One other tricky edge case that occurs to me is one where an extreme fill value (e.g. 1e30) is used for floating point fields. If we decode the times first, it might appear that the dates cannot be represented as nanosecond-precision values, but in reality they would be. We may need to think more about how to handle this edge case in addition to #7817.
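The concern can be illustrated with plain numpy: if an extreme fill value is left in place before checking the representable range, the apparent min/max of the raw values is wildly wrong, while masking first recovers the true range. A minimal sketch (the fill value and data are made up):

```python
import numpy as np

FILL = 1e30  # hypothetical _FillValue for a floating-point time variable
raw = np.array([0.0, 12.0, 24.0, FILL])

# a naive range check sees the fill value and vastly overstates the range
naive_max = raw.max()

# masking the fill value first gives the true range of valid times
valid = raw[raw != FILL]
```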

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Preserve nanosecond resolution when encoding/decoding times 1700227455
1567037851 https://github.com/pydata/xarray/issues/7814#issuecomment-1567037851 https://api.github.com/repos/pydata/xarray/issues/7814 IC_kwDOAMm_X85dZxmb keewis 14808389 2023-05-29T11:53:37Z 2023-05-29T12:10:00Z MEMBER

Actually, it goes away with pip install jinja2. We don't use jinja2 at all, so either this is some kind of weird effect on garbage collection (a timing issue?), or dask is doing something differently as soon as jinja2 is available.

Edit: most likely this is a timing issue... the offending line tries to make use of the internal acquire function, which I think at that point has already been destroyed. To fix that, I think we need to somehow store a reference on the file manager?
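A minimal sketch of that idea (class and attribute names hypothetical): capture the helper on the instance at construction time, so `__del__` never has to look up a module-level name that interpreter shutdown may already have cleared.

```python
import gc

class ManagerSketch:
    closed = []  # records released resources, for demonstration only

    def __init__(self, resource, _registry=closed):
        # keep references on the instance itself; module globals may be
        # gone by the time __del__ runs at interpreter shutdown
        self._resource = resource
        self._registry = _registry

    def __del__(self):
        registry = getattr(self, "_registry", None)
        if registry is not None:
            registry.append(self._resource)

m = ManagerSketch("file-handle")
del m
gc.collect()
```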

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  TypeError: 'NoneType' object is not callable when joining netCDF files. Works when ran interactively. 1695028906
1567025628 https://github.com/pydata/xarray/issues/7814#issuecomment-1567025628 https://api.github.com/repos/pydata/xarray/issues/7814 IC_kwDOAMm_X85dZunc keewis 14808389 2023-05-29T11:41:06Z 2023-05-29T11:56:20Z MEMBER

I can reproduce this locally:
- download and unpack the files from https://github.com/pydata/xarray/issues/7814#issuecomment-1535168128
- use `mamba create -n test python=3.11 xarray netcdf4` to create the environment (note: no dask)
- use `pip install "dask[array]"` to install dask (does not pull in distributed, unlike the package from conda-forge)
- put the code into a script and execute it

For reference, the full traceback is:

```pytb
Exception ignored in: <function CachingFileManager.__del__ at 0x7fbdb237b430>
Traceback (most recent call last):
  File "/home/jmagin/.local/opt/mambaforge/envs/test/lib/python3.9/site-packages/xarray/backends/file_manager.py", line 246, in __del__
TypeError: 'NoneType' object is not callable
```

As far as I can tell, this means we're using something from distributed with a broken fallback, since the error goes away as soon as I install distributed.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  TypeError: 'NoneType' object is not callable when joining netCDF files. Works when ran interactively. 1695028906
1564629142 https://github.com/pydata/xarray/pull/7874#issuecomment-1564629142 https://api.github.com/repos/pydata/xarray/issues/7874 IC_kwDOAMm_X85dQliW welcome[bot] 30606887 2023-05-26T16:19:38Z 2023-05-26T16:19:38Z NONE

Congratulations on completing your first pull request! Welcome to Xarray! We are proud of you, and hope to see you again!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Changed duck typing exception to: (ImportError, AttributeError) 1725525753
1563788348 https://github.com/pydata/xarray/pull/7875#issuecomment-1563788348 https://api.github.com/repos/pydata/xarray/issues/7875 IC_kwDOAMm_X85dNYQ8 Illviljan 14371165 2023-05-26T04:17:07Z 2023-05-26T04:18:08Z MEMBER

`cos` is a float operation, so I would lean towards using an isclose check: `xr.testing.assert_allclose(a + 1, np.cos(a))`.
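The reasoning behind preferring `assert_allclose` can be shown with plain numpy (a classic example, unrelated to the PR's actual arrays):

```python
import numpy as np

# exact equality is fragile for float operations: even simple arithmetic
# accumulates rounding error, so compare with a tolerance instead
a = np.float64(0.1) + np.float64(0.2)
```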

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  defer to `numpy` for the expected result 1726529405
1563078509 https://github.com/pydata/xarray/issues/7856#issuecomment-1563078509 https://api.github.com/repos/pydata/xarray/issues/7856 IC_kwDOAMm_X85dKq9t frazane 62377868 2023-05-25T15:10:04Z 2023-05-25T15:19:49Z CONTRIBUTOR

Same issue here. I installed xarray with conda/mamba (not a dev install).

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.11.3 | packaged by conda-forge | (main, Apr 6 2023, 08:57:19) [GCC 11.3.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-1160.42.2.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.0
libnetcdf: 4.9.2

xarray: 2023.4.2
pandas: 2.0.1
numpy: 1.24.3
scipy: 1.10.1
netCDF4: 1.6.3
h5netcdf: None
h5py: None
zarr: 2.14.2
dask: 2023.4.1
distributed: None
pip: 23.1.2
IPython: 8.13.1
```

Edit: downgrading to 2023.4.0 solved the issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Unrecognized chunk manager dask - must be one of: [] 1718410975
1563092362 https://github.com/pydata/xarray/issues/7856#issuecomment-1563092362 https://api.github.com/repos/pydata/xarray/issues/7856 IC_kwDOAMm_X85dKuWK keewis 14808389 2023-05-25T15:19:26Z 2023-05-25T15:19:26Z MEMBER

how did you set up your environment? This works for me:

```sh
mamba create -n test python=3.11 xarray dask netcdf4 pooch ipython
mamba activate test
ipython
```

```python
xr.tutorial.open_dataset("rasm", chunks={})
```

Interestingly enough, though, you should only see this with xarray=2023.5.0, while your environment claims to have xarray=2023.4.2. It seems there is something wrong with your environment?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Unrecognized chunk manager dask - must be one of: [] 1718410975
1562734279 https://github.com/pydata/xarray/issues/7871#issuecomment-1562734279 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dJW7H gkb999 7091088 2023-05-25T11:23:44Z 2023-05-25T11:23:44Z NONE

Yes float64 should cause less imprecision. You can convert using astype:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.array([1, 2], dtype=np.float32))

da = da.astype(float)
```

As for the other problems, I think you are better off asking the people over at rioxarray. However, you should first gather all the steps you did to convert the data as code. This way it is easier to see what you are actually doing.

Thanks for getting back. I did post in rioxarray, and yet the last step I mentioned isn't successful there either. I'll post the code in maybe 8 hrs (when I can reach my system). Thanks for all the helpful suggestions so far. Really helpful.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1562707652 https://github.com/pydata/xarray/issues/7871#issuecomment-1562707652 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dJQbE mathause 10194086 2023-05-25T11:02:29Z 2023-05-25T11:02:29Z MEMBER

Yes float64 should cause less imprecision. You can convert using astype:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.array([1, 2], dtype=np.float32))

da = da.astype(float)
```

As for the other problems, I think you are better off asking the people over at rioxarray. However, you should first gather all the steps you did to convert the data as code. This way it is easier to see what you are actually doing.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1562698250 https://github.com/pydata/xarray/issues/7871#issuecomment-1562698250 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dJOIK gkb999 7091088 2023-05-25T10:55:09Z 2023-05-25T10:55:09Z NONE

xarray handles nan values and ignores them per default - so you don't need to remove them. For example:

```python
import numpy as np
import xarray as xr

da = xr.DataArray([1, 2, 3, np.nan])
da.mean()
```

This is really helpful as I didn't know this before.

If you have precision problems - that might be because you have float32 values.

Which format would avoid the issue in that case: float64? If yes, can we convert manually?

I don't know what goes wrong with your lon values - that is an issue in the reprojection. You could convert them to 0...360 by using

```python
lon_dim = "x"
new_lon = np.mod(da[lon_dim], 360)
da = da.assign_coords(**{lon_dim: new_lon})
da.reindex(**{lon_dim: np.sort(da[lon_dim])})
```

Yeah, I have done the 180 to 360 degree conversion before. But I feel the issue is more with the rioxarray reprojection. The data from the internet is in meters; as I wanted it in degrees/lat-lon format, I converted it from polar stereographic to WGS84. This converted the data's coordinates to degrees. The latitudes are perfect, but the longitudes are arranged from -180 to +180 instead of 160E to 199W. I also tried wrapping longitude to 0-360, but while the values should technically fall in the 160-200 range, the longitudes span and stretch through all of 0-360, which isn't right.

So, converting the existing gridded data (in meters) to a lat-lon projection without affecting the resolution and without nans is my ultimate aim/objective. I successfully converted the data to lat-lon and clipped it to the region, but it drastically changed the resolution, maybe around 20 times. Preserving the resolution is very important for my work. So, that's the issue with the longitudes.

Thanks for your time if you went through this.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1562648682 https://github.com/pydata/xarray/pull/7874#issuecomment-1562648682 https://api.github.com/repos/pydata/xarray/issues/7874 IC_kwDOAMm_X85dJCBq welcome[bot] 30606887 2023-05-25T10:15:41Z 2023-05-25T10:15:41Z NONE

Thank you for opening this pull request! It may take us a few days to respond here, so thank you for being patient. If you have questions, some answers may be found in our contributing guidelines.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Changed duck typing exception to: (ImportError, AttributeError) 1725525753
1562637946 https://github.com/pydata/xarray/issues/7870#issuecomment-1562637946 https://api.github.com/repos/pydata/xarray/issues/7870 IC_kwDOAMm_X85dI_Z6 keewis 14808389 2023-05-25T10:07:35Z 2023-05-25T10:07:35Z MEMBER

I agree, this change should be fine.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Name collision with Pulsar Timing package 'PINT'  1722614979
1562615805 https://github.com/pydata/xarray/issues/7870#issuecomment-1562615805 https://api.github.com/repos/pydata/xarray/issues/7870 IC_kwDOAMm_X85dI5_9 vhaasteren 3092444 2023-05-25T09:52:06Z 2023-05-25T09:52:06Z CONTRIBUTOR

Thank you @TomNicholas, that is encouraging to hear. I will wait for @keewis to respond before filing a PR.

FWIW, I have tested the modification I suggest in my fork of xarray, and it works well for our purposes. It just generalizes the exception catch.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Name collision with Pulsar Timing package 'PINT'  1722614979
1562605326 https://github.com/pydata/xarray/issues/7871#issuecomment-1562605326 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dI3cO mathause 10194086 2023-05-25T09:44:31Z 2023-05-25T09:44:31Z MEMBER

xarray handles nan values and ignores them per default - so you don't need to remove them. For example:

```python
import numpy as np
import xarray as xr

da = xr.DataArray([1, 2, 3, np.nan])
da.mean()
```

If you have precision problems - that might be because you have `float32` values.

I don't know what goes wrong with your lon values - that is an issue in the reprojection. You could convert them to 0...360 by using

```python
lon_dim = "x"
new_lon = np.mod(da[lon_dim], 360)
da = da.assign_coords({lon_dim: new_lon})
da.reindex({lon_dim: np.sort(da[lon_dim])})
```
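The nan-skipping default can also be switched off explicitly via the standard `skipna` argument - a minimal sketch:

```python
import numpy as np
import xarray as xr

da = xr.DataArray([1.0, 2.0, 3.0, np.nan])

# Reductions skip NaN by default for float data...
print(da.mean().item())              # 2.0
# ...but skipna=False propagates the NaN instead
print(da.mean(skipna=False).item())  # nan
```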

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1562040566 https://github.com/pydata/xarray/issues/7344#issuecomment-1562040566 https://api.github.com/repos/pydata/xarray/issues/7344 IC_kwDOAMm_X85dGtj2 riley-brady 82663402 2023-05-24T23:12:48Z 2023-05-24T23:12:48Z NONE

I want to add a +1 for disabling it by default. It's pretty common to be working with float32-precision arrays. I have a rolling mean operation early on in some code, and the errors balloon over time in subsequent processes. This was a super obscure bug to track down as well.
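The drift described above is easy to reproduce with plain NumPy - a minimal sketch of naive float32 accumulation (illustrative only, not bottleneck's actual code path):

```python
import numpy as np

def running_sum(values):
    # Naive running accumulation in the array's own precision,
    # similar in spirit to what a running/rolling reduction does.
    acc = values.dtype.type(0)
    for v in values:
        acc += v
    return acc

vals32 = np.full(100_000, 0.01, dtype=np.float32)
vals64 = vals32.astype(np.float64)

# The float32 accumulation drifts away from 1000 far more than float64 does.
print(running_sum(vals32))
print(running_sum(vals64))
```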

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Disable bottleneck by default? 1471685307
1561999178 https://github.com/pydata/xarray/issues/7871#issuecomment-1561999178 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dGjdK gkb999 7091088 2023-05-24T22:17:02Z 2023-05-24T22:17:02Z NONE

Well, that does make sense. I want to calculate anomalies along the x-y grids, and I'm guessing the nan values are interfering with the results. Also, I have another question which isn't about NaNs; if it is right to ask here, I may proceed (else tag/link the relevant place/forum). Assuming you must know: I reprojected my nc file from meters to degrees. Now, although the projection is right, the values of longitude aren't:

```python
x  (x)  float64  -179.2 -177.7 ... 177.7 179.2
array([-179.217367, -177.65215 , -176.086933, -174.521715, -172.956498,
       -171.391281, -169.826063, -168.260846, -166.695629, -165.130412,
       -163.565194, -161.999977, -160.43476 , -158.869542,  163.565218,
        165.130436,  166.695653,  168.26087 ,  169.826088,  171.391305,
        172.956522,  174.521739,  176.086957,  177.652174,  179.217391])
```

This is not how it is supposed to be: the values should fall within 160-200 longitudes (after wrapping to 360).

Is there a way xarray can sort this automatically, or do I need to manually reset the coordinates?
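One option is `DataArray.sortby`, which reorders the data along with the coordinate - a minimal sketch with made-up values:

```python
import numpy as np
import xarray as xr

# Toy DataArray whose longitudes are out of order after wrapping to 0-360
da = xr.DataArray(
    [1, 2, 3, 4],
    dims="x",
    coords={"x": [170.0, 185.0, 350.0, 10.0]},
)

# sortby sorts the coordinate ascending and reorders the data to match
da_sorted = da.sortby("x")
print(da_sorted["x"].values)  # ascending: 10, 170, 185, 350
print(da_sorted.values)       # data follows its coordinate
```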

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1561584592 https://github.com/pydata/xarray/issues/7868#issuecomment-1561584592 https://api.github.com/repos/pydata/xarray/issues/7868 IC_kwDOAMm_X85dE-PQ kmuehlbauer 5821660 2023-05-24T16:50:34Z 2023-05-24T16:50:34Z MEMBER

Thanks @ghiggi for your comment.

The problem is we have at least two contradicting user requests here, see #7328 and #7862.

I'm sure there is a solution to accommodate both sides.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `open_dataset` with `chunks="auto"` fails when a netCDF4 variables/coordinates is encoded as `NC_STRING` 1722417436
1561543105 https://github.com/pydata/xarray/issues/7870#issuecomment-1561543105 https://api.github.com/repos/pydata/xarray/issues/7870 IC_kwDOAMm_X85dE0HB TomNicholas 35968931 2023-05-24T16:31:30Z 2023-05-24T16:31:30Z MEMBER

Thanks for raising this @vhaasteren ! We want to do what we can to support users from all fields of science :)

I would be okay with that change (especially as it's not really special-casing pint-pulsar, so much as generalizing an existing error-catching mechanism), but would defer to the opinion of @keewis on this.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Name collision with Pulsar Timing package 'PINT'  1722614979
1561504841 https://github.com/pydata/xarray/issues/7856#issuecomment-1561504841 https://api.github.com/repos/pydata/xarray/issues/7856 IC_kwDOAMm_X85dEqxJ TomNicholas 35968931 2023-05-24T16:16:41Z 2023-05-24T16:26:15Z MEMBER

Solution for those who just found this issue:

Just re-install xarray. `pip install -e .` is sufficient. Re-installing any way through pip/conda should register the dask chunkmanager entrypoint.


@Illviljan I brought this up in the xarray team call today, and we decided that, since this only affects people who have previously cloned the xarray repository, are using a development install, and then updated by pulling changes from main, the problem affects maybe ~10-20 people worldwide, all of whom are developers equipped to quickly solve it.

I'm going to add a note into the what's new entry for this version now - if you think we need to do more then let me know.

EDIT: I added a note to whatsnew in https://github.com/pydata/xarray/commit/69445c62953958488a6b35fafd8b9cfd6c0374a5, and updated the release notes.

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Unrecognized chunk manager dask - must be one of: [] 1718410975
1561481756 https://github.com/pydata/xarray/pull/7795#issuecomment-1561481756 https://api.github.com/repos/pydata/xarray/issues/7795 IC_kwDOAMm_X85dElIc trexfeathers 40734014 2023-05-24T16:07:58Z 2023-05-24T16:07:58Z NONE

If you're curious what happened, we had the same problem: https://github.com/SciTools/iris/issues/5280#issuecomment-1525802077

Just wish I'd spotted this sooner but it's quite hard to follow two organisations' repos 😆

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [skip-ci] Add cftime groupby, resample benchmarks 1688781350
1561358915 https://github.com/pydata/xarray/issues/7868#issuecomment-1561358915 https://api.github.com/repos/pydata/xarray/issues/7868 IC_kwDOAMm_X85dEHJD ghiggi 19285200 2023-05-24T15:20:00Z 2023-05-24T15:20:00Z NONE

A dask array with dtype object can contain any Python object (e.g. I have seen geometries and matplotlib collections inside dask arrays with object dtype). As a consequence, dask does not try the conversion to e.g. str to estimate the array size, since AFAIK there is no clean way to attach an attribute to the dtype suggesting that the object is actually a string.

With your PR, the dtype is no longer object when creating the dask.array, and I guess this solves the issue. Did I overlook something?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `open_dataset` with `chunks="auto"` fails when a netCDF4 variables/coordinates is encoded as `NC_STRING` 1722417436
1561328867 https://github.com/pydata/xarray/issues/5644#issuecomment-1561328867 https://api.github.com/repos/pydata/xarray/issues/5644 IC_kwDOAMm_X85dD_zj malmans2 22245117 2023-05-24T15:02:44Z 2023-05-24T15:02:44Z CONTRIBUTOR

Do you know where the in-place modification is happening? We could just copy there and fix this particular issue.

Not sure, but I'll take a look!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `polyfit` with weights alters the DataArray in place 955043280
1561317714 https://github.com/pydata/xarray/issues/7873#issuecomment-1561317714 https://api.github.com/repos/pydata/xarray/issues/7873 IC_kwDOAMm_X85dD9FS anmyachev 45976948 2023-05-24T14:56:47Z 2023-05-24T14:56:47Z NONE

We dropped Python 3.8 support prior to the Pandas 2 release and have no plans to backport support at this time.

xref: #7765

Thanks for the answer!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  No `Xarray` conda package compatible with pandas>=2 for python 3.8 1724137371
1561308333 https://github.com/pydata/xarray/pull/7862#issuecomment-1561308333 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85dD6yt tomwhite 85085 2023-05-24T14:51:23Z 2023-05-24T14:51:23Z CONTRIBUTOR

So it looks like the changes here with the fix in my branch will get your issue resolved @tomwhite, right?

Yes - thanks!

I'm a bit worried that this might break other users' workflows, if they depend on the current conversion to floating point for some reason.

The floating point default is preserved if you do e.g. `xr.Dataset({"a": np.array([], dtype=object)})`. The change here will only convert to string if there is extra metadata present that says it is a string.
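The distinction can be seen directly on the dtype - a small sketch using the `create_vlen_dtype` helper referenced later in this thread:

```python
import numpy as np
from xarray.coding.strings import create_vlen_dtype

# A plain object array carries no hint about its element type...
plain = np.array([], dtype=object)
# ...while a vlen-string dtype tags the element type in dtype.metadata
tagged = np.array([], dtype=create_vlen_dtype(str))

print(plain.dtype.metadata)   # None
print(tagged.dtype.metadata)  # {'element_type': <class 'str'>}
```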

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1561302572 https://github.com/pydata/xarray/issues/7873#issuecomment-1561302572 https://api.github.com/repos/pydata/xarray/issues/7873 IC_kwDOAMm_X85dD5Ys jhamman 2443309 2023-05-24T14:47:56Z 2023-05-24T14:47:56Z MEMBER

We dropped Python 3.8 support prior to the Pandas 2 release and have no plans to backport support at this time.

xref: #7765

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 0
}
  No `Xarray` conda package compatible with pandas>=2 for python 3.8 1724137371
1561285499 https://github.com/pydata/xarray/pull/7862#issuecomment-1561285499 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85dD1N7 kmuehlbauer 5821660 2023-05-24T14:37:58Z 2023-05-24T14:37:58Z MEMBER

Thanks for trying. I can't think of any downsides for the netcdf4-fix, as it just adds the needed metadata to the object-dtype. But you never know, so it would be good to get another set of eyes on it.

So it looks like the changes here with the fix in my branch will get your issue resolved @tomwhite, right?

I'm a bit worried that this might break other users' workflows if they depend on the current conversion to floating point for some reason. Also, other backends might rely on this behaviour, especially because it has been there since the early days when xarray was known as xray.

@dcherian What would be the way to go here?

There is also a somewhat contradicting issue in #7868.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1561269845 https://github.com/pydata/xarray/issues/7873#issuecomment-1561269845 https://api.github.com/repos/pydata/xarray/issues/7873 IC_kwDOAMm_X85dDxZV welcome[bot] 30606887 2023-05-24T14:29:15Z 2023-05-24T14:29:15Z NONE

Thanks for opening your first issue here at xarray! Be sure to follow the issue template! If you have an idea for a solution, we would really welcome a Pull Request with proposed changes. See the Contributing Guide for more. It may take us a while to respond here, but we really value your contribution. Contributors like you help make xarray better. Thank you!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  No `Xarray` conda package compatible with pandas>=2 for python 3.8 1724137371
1561240314 https://github.com/pydata/xarray/pull/7862#issuecomment-1561240314 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85dDqL6 tomwhite 85085 2023-05-24T14:12:49Z 2023-05-24T14:12:49Z CONTRIBUTOR

Could you verify the above example, please?

The code looks fine, and I get the same result when I run it with this PR.

Your fix in https://github.com/kmuehlbauer/xarray/tree/preserve-vlen-string-dtype changes the behaviour so the dtype metadata is correctly preserved as `metadata: {'element_type': <class 'str'>}`.

I feel less qualified to evaluate the impact of the netcdf4 fix.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1561214028 https://github.com/pydata/xarray/issues/7868#issuecomment-1561214028 https://api.github.com/repos/pydata/xarray/issues/7868 IC_kwDOAMm_X85dDjxM kmuehlbauer 5821660 2023-05-24T13:58:16Z 2023-05-24T13:58:16Z MEMBER

My main question here is: why is dask not trying to retrieve the object types from `dtype.metadata`? Or does it try and fail for some reason?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `open_dataset` with `chunks="auto"` fails when a netCDF4 variables/coordinates is encoded as `NC_STRING` 1722417436
1561195832 https://github.com/pydata/xarray/pull/7862#issuecomment-1561195832 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85dDfU4 kmuehlbauer 5821660 2023-05-24T13:52:04Z 2023-05-24T13:52:04Z MEMBER

@tomwhite I've put a commit with changes to zarr/netcdf4-backends which should preserve the dtype metadata here: https://github.com/kmuehlbauer/xarray/tree/preserve-vlen-string-dtype.

I'm not really sure if that is the right location, but as it was already present at that location in the netcdf4 backend, I think it will do.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1561173824 https://github.com/pydata/xarray/issues/5644#issuecomment-1561173824 https://api.github.com/repos/pydata/xarray/issues/5644 IC_kwDOAMm_X85dDZ9A dcherian 2448579 2023-05-24T13:39:30Z 2023-05-24T13:39:30Z MEMBER

Do you know where the in-place modification is happening? We could just copy there and fix this particular issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `polyfit` with weights alters the DataArray in place 955043280
1561162311 https://github.com/pydata/xarray/pull/7862#issuecomment-1561162311 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85dDXJH kmuehlbauer 5821660 2023-05-24T13:32:26Z 2023-05-24T13:32:57Z MEMBER

@tomwhite Special casing on netcdf4 backend should be possible, too.

But it might need fixing at zarr backend, too:

```python
ds = xr.Dataset({"a": np.array([], dtype=xr.coding.strings.create_vlen_dtype(str))})
print(f"dtype: {ds['a'].dtype}")
print(f"metadata: {ds['a'].dtype.metadata}")
ds.to_zarr("a.zarr")
print("\n### Loading ###")
with xr.open_dataset("a.zarr", engine="zarr") as ds:
    print(f"dtype: {ds['a'].dtype}")
    print(f"metadata: {ds['a'].dtype.metadata}")
```

This prints:

```
dtype: object
metadata: {'element_type': <class 'str'>}

### Loading ###
dtype: object
metadata: None
```

Could you verify the above example, please? I'm relatively new to zarr :grimacing:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1561143111 https://github.com/pydata/xarray/pull/7862#issuecomment-1561143111 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85dDSdH tomwhite 85085 2023-05-24T13:23:18Z 2023-05-24T13:23:18Z CONTRIBUTOR

Thanks for taking a look @kmuehlbauer and for the useful example code. I hadn't considered the netcdf cases, so thanks for pointing those out.

Engine netcdf4 does not roundtrip here, losing the dtype metadata information. There is special casing for h5netcdf backend, though.

Could netcdf4 do the same special-casing as h5netcdf?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1561096393 https://github.com/pydata/xarray/issues/5644#issuecomment-1561096393 https://api.github.com/repos/pydata/xarray/issues/5644 IC_kwDOAMm_X85dDHDJ headtr1ck 43316012 2023-05-24T12:56:48Z 2023-05-24T12:56:48Z COLLABORATOR

You can always reach out to the creator of the original PR and comment in the PR.

But it looks like this particular PR had reached a dead end and should be completely rewritten. Anyway, the reviewers left helpful remarks on how to proceed.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `polyfit` with weights alters the DataArray in place 955043280
1561093283 https://github.com/pydata/xarray/pull/7551#issuecomment-1561093283 https://api.github.com/repos/pydata/xarray/issues/7551 IC_kwDOAMm_X85dDGSj garciampred 99014432 2023-05-24T12:54:46Z 2023-05-24T12:55:08Z CONTRIBUTOR

This is currently stuck waiting until the problems with the latest netcdf-c versions are fixed in a new release; see https://github.com/pydata/xarray/issues/7388.

When they are fixed I will write the tests if I have time. But of course any help and suggestions are welcome.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support for the new compression arguments. 1596511582
1561052487 https://github.com/pydata/xarray/issues/7872#issuecomment-1561052487 https://api.github.com/repos/pydata/xarray/issues/7872 IC_kwDOAMm_X85dC8VH welcome[bot] 30606887 2023-05-24T12:37:46Z 2023-05-24T12:37:46Z NONE

Thanks for opening your first issue here at xarray! Be sure to follow the issue template! If you have an idea for a solution, we would really welcome a Pull Request with proposed changes. See the Contributing Guide for more. It may take us a while to respond here, but we really value your contribution. Contributors like you help make xarray better. Thank you!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `Dataset.to_array()` throws `IndexError` for empty datasets 1723889854
1561014651 https://github.com/pydata/xarray/pull/7551#issuecomment-1561014651 https://api.github.com/repos/pydata/xarray/issues/7551 IC_kwDOAMm_X85dCzF7 sfinkens 1991007 2023-05-24T12:15:18Z 2023-05-24T12:15:18Z NONE

@markelg Thanks a lot for adding this! Do you have time to finalize it in the near future? If not, I could also take a look at the tests if you like.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support for the new compression arguments. 1596511582
1560777789 https://github.com/pydata/xarray/issues/7871#issuecomment-1560777789 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dB5Q9 mathause 10194086 2023-05-24T09:32:46Z 2023-05-24T09:32:46Z MEMBER

Yes, but there are fewer: as mentioned, it removes only the columns/rows that contain nothing but nans; if there is at least one non-nan value, the row is kept.

What is the reason that you want to get rid of the nan values?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1560674198 https://github.com/pydata/xarray/issues/7868#issuecomment-1560674198 https://api.github.com/repos/pydata/xarray/issues/7868 IC_kwDOAMm_X85dBf-W kmuehlbauer 5821660 2023-05-24T08:27:11Z 2023-05-24T08:27:11Z MEMBER

@ghiggi Glad it works, but we still have to check if that is the correct location for the fix, as it's not CF specific.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `open_dataset` with `chunks="auto"` fails when a netCDF4 variables/coordinates is encoded as `NC_STRING` 1722417436

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 3200.262ms · About: xarray-datasette