html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/7713#issuecomment-1526051246,https://api.github.com/repos/pydata/xarray/issues/7713,1526051246,IC_kwDOAMm_X85a9bGu,44142765,2023-04-27T17:11:09Z,2023-04-27T17:11:34Z,NONE,"@kmuehlbauer It looks like a bug in the code if indeed tuples are meant to be treated the same as any sequence of data. These lines https://github.com/pydata/xarray/blob/0f4e99d036b0d6d76a3271e6191eacbc9922662f/xarray/core/variable.py#L259-L260 suggest that when a tuple is passed in, it is converted to a 0-dimension array of type object via https://github.com/pydata/xarray/blob/0f4e99d036b0d6d76a3271e6191eacbc9922662f/xarray/core/utils.py#L344-L348
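For illustration, a NumPy-only sketch of what that helper does (the function name mirrors `to_0d_object_array` from the linked `utils.py`; the body here is a minimal reconstruction, not the actual xarray source):

```python
import numpy as np

# Sketch of the conversion the linked utils.py lines perform: wrap an
# arbitrary value -- here a tuple -- in a 0-dimensional object array
# instead of letting np.asarray treat it as a sequence.
def to_0d_object_array(value):
    result = np.empty((), dtype=object)
    result[()] = value
    return result

arr = to_0d_object_array((1, 2, 3))
print(arr.ndim, arr.dtype)  # 0 object
print(arr[()])              # (1, 2, 3)

# By contrast, a list passed to np.asarray keeps its sequence shape:
print(np.asarray([1, 2, 3]).shape)  # (3,)
```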
Maybe removing the tuple type check and relying on this line https://github.com/pydata/xarray/blob/0f4e99d036b0d6d76a3271e6191eacbc9922662f/xarray/core/variable.py#L287-L288 is better?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1652227927
https://github.com/pydata/xarray/issues/7377#issuecomment-1478331489,https://api.github.com/repos/pydata/xarray/issues/7377,1478331489,IC_kwDOAMm_X85YHYxh,44142765,2023-03-21T17:42:29Z,2023-03-21T17:42:29Z,NONE,">
Does this work for an array of quantiles, and does it require the time coordinate to have a single chunk?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1497031605
https://github.com/pydata/xarray/issues/6891#issuecomment-1474097791,https://api.github.com/repos/pydata/xarray/issues/6891,1474097791,IC_kwDOAMm_X85X3PJ_,44142765,2023-03-17T16:30:58Z,2023-03-17T16:31:12Z,NONE,"@alrho007 I still get this error using version `2022.9.0`:
```python
In [1]: import pandas as pd
   ...: import xarray as xr
   ...: import numpy as np
   ...:
   ...: da = xr.DataArray(
   ...:
   ...:     np.random.rand(4, 3),
   ...:
   ...:     [
   ...:
   ...:         (""time"", pd.date_range(""2000-01-01"", periods=4)),
   ...:
   ...:         (""space"", [""IA"", ""IL"", ""IN""]),
   ...:
   ...:     ],
   ...:
   ...: )
   ...: da.curvefit(coords=[""time""], func=lambda x, params: x, method=""trf"")
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[1], line 18
      3 import numpy as np
      5 da = xr.DataArray(
      6
      7     np.random.rand(4, 3),
   (...)
     16
     17 )
---> 18 da.curvefit(coords=[""time""], func=lambda x, params: x, method=""trf"")
TypeError: curvefit() got an unexpected keyword argument 'method'
In [2]: xr.__version__
Out[2]: '2022.9.0'
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1331985070
https://github.com/pydata/xarray/issues/3653#issuecomment-1464128552,https://api.github.com/repos/pydata/xarray/issues/3653,1464128552,IC_kwDOAMm_X85XRNQo,44142765,2023-03-10T17:24:35Z,2023-03-10T17:26:06Z,NONE,"
> More concise syntax for the same thing
>
> ```python
> import xarray as xr
> import fsspec
>
> url = 'https://www.ldeo.columbia.edu/~rpa/NOAA_NCDC_ERSST_v3b_SST.nc'
> with fsspec.open(url) as fobj:
> ds = xr.open_dataset(fobj)
> print(ds)
> ```
Running this snippet more than once raises a `ValueError: I/O operation on closed file.` exception. Any idea why that might be?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,543197350
https://github.com/pydata/xarray/issues/4122#issuecomment-1453906696,https://api.github.com/repos/pydata/xarray/issues/4122,1453906696,IC_kwDOAMm_X85WqNsI,44142765,2023-03-03T18:08:07Z,2023-03-03T18:08:07Z,NONE,"Based on the docs
> The default format is NETCDF4 if you are saving a file to disk and have the netCDF4-python library available. Otherwise, xarray falls back to using scipy to write netCDF files and defaults to the NETCDF3_64BIT format (scipy does not support netCDF4).
It appears the scipy engine is a safe fallback if one does not want to bother with specifying engines. By the way, what are the limitations of the `netcdf3` standard vs `netcdf4`?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,631085856
https://github.com/pydata/xarray/issues/4122#issuecomment-1453897364,https://api.github.com/repos/pydata/xarray/issues/4122,1453897364,IC_kwDOAMm_X85WqLaU,44142765,2023-03-03T18:00:33Z,2023-03-03T18:00:33Z,NONE,"I never needed to specify an engine when writing; you only need it when reading the file. I use `engine=""scipy""` for reading.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,631085856
https://github.com/pydata/xarray/issues/4122#issuecomment-1401040677,https://api.github.com/repos/pydata/xarray/issues/4122,1401040677,IC_kwDOAMm_X85Tgi8l,44142765,2023-01-23T21:49:46Z,2023-01-23T21:52:29Z,NONE,"What didn't work:
```python
f = fsspec.filesystem(""s3"", anon=False)
with f.open(""some_bucket/some_remote_destination.nc"", mode=""wb"") as ff:
xr.open_dataset(""some_local_file.nc"").to_netcdf(ff)
```
This results in an `OSError: [Errno 29] Seek only available in read mode` exception.
Changing the above to
```python
with fsspec.open(""simplecache::s3://some_bucket/some_remote_destination.nc"", mode=""wb"") as ff:
xr.open_dataset(""some_local_file.nc"").to_netcdf(ff)
```
fixed it.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,631085856
https://github.com/pydata/xarray/issues/4122#issuecomment-1400564474,https://api.github.com/repos/pydata/xarray/issues/4122,1400564474,IC_kwDOAMm_X85Teur6,44142765,2023-01-23T15:44:20Z,2023-01-23T15:44:20Z,NONE,"> '/silt/usgs/Projects/stellwagen/CF-1.6/BUZZ_BAY/2651-A.cdf')
> outfile = fsspec.open('simpl
Thanks, this actually worked for me. It seems as though initializing an s3 store using `fs = fsspec.S3FileSystem(...)` beforehand and using it as a context manager via `with fs.open(...) as out: data.to_netcdf(out)` caused the failure.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,631085856
https://github.com/pydata/xarray/issues/4122#issuecomment-1400519887,https://api.github.com/repos/pydata/xarray/issues/4122,1400519887,IC_kwDOAMm_X85TejzP,44142765,2023-01-23T15:16:21Z,2023-01-23T15:16:21Z,NONE,Is there any reliable way to write an `xr.Dataset` object as a netCDF file in 2023? I tried the above approach with `fsspec` but I keep getting an `OSError: [Errno 29] Seek only available in read mode` exception.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,631085856
https://github.com/pydata/xarray/issues/508#issuecomment-1308665605,https://api.github.com/repos/pydata/xarray/issues/508,1308665605,IC_kwDOAMm_X85OAKcF,44142765,2022-11-09T12:16:13Z,2022-11-09T12:16:13Z,NONE,Any plans to support this?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,98587746
https://github.com/pydata/xarray/issues/5888#issuecomment-1253668750,https://api.github.com/repos/pydata/xarray/issues/5888,1253668750,IC_kwDOAMm_X85KuXeO,44142765,2022-09-21T12:53:18Z,2022-09-21T12:53:18Z,NONE,"I experienced something similar, but when using a string glob instead. Accessing `encoding[""source""]` raises a `KeyError` exception when trying to preprocess the data loaded by `open_mfdataset`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1033950863
https://github.com/pydata/xarray/issues/6395#issuecomment-1074074385,https://api.github.com/repos/pydata/xarray/issues/6395,1074074385,IC_kwDOAMm_X85ABRMR,44142765,2022-03-21T15:56:27Z,2022-03-21T15:56:27Z,NONE,"> This is expected behavior: Xarray variable objects can wrap numpy arrays but generally they don't make copies of the underlying data. So it is possible that two different variables wrap the same numpy array, like in your example and in the example below.
>
> ```python
> data = np.array([1, 2, 3])
>
> v1 = xr.Variable(""x"", data)
> v2 = xr.Variable(""x"", data)
>
> print(v1)
> # <xarray.Variable (x: 3)>
> # array([1, 2, 3])
>
> print(v2)
> # <xarray.Variable (x: 3)>
> # array([1, 2, 3])
>
> data[0] = 10
>
> print(v1)
> # <xarray.Variable (x: 3)>
> # array([10, 2, 3])
>
> print(v2)
> # <xarray.Variable (x: 3)>
> # array([10, 2, 3])
> ```
>
> `Dataset.copy(deep=True)` makes a deep copy, thus copying the underlying data.
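The aliasing described above is plain NumPy behavior; a minimal NumPy-only sketch of a shared buffer versus a copied one (names are illustrative):

```python
import numpy as np

data = np.array([1, 2, 3])

alias = data        # no copy: both names refer to the same buffer,
                    # analogous to two Variables wrapping one array
deep = data.copy()  # independent buffer, analogous to copy(deep=True)

data[0] = 10
print(alias[0])  # 10 -- the mutation is visible through the alias
print(deep[0])   # 1  -- the copy is unaffected
```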
Thanks for the quick response. I wasn't aware of this. I had assumed a new Dataset object would hold a copy of the underlying `var_map` data passed into it. Maybe this could be mentioned as a note in the docstring somewhere?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1175517164