issue_comments

12 rows where author_association = "NONE" and user = 44142765 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1526051246 https://github.com/pydata/xarray/issues/7713#issuecomment-1526051246 https://api.github.com/repos/pydata/xarray/issues/7713 IC_kwDOAMm_X85a9bGu zoj613 44142765 2023-04-27T17:11:09Z 2023-04-27T17:11:34Z NONE

@kmuehlbauer It looks like a bug in the code if indeed tuples are meant to be treated the same as any other sequence of data. These lines https://github.com/pydata/xarray/blob/0f4e99d036b0d6d76a3271e6191eacbc9922662f/xarray/core/variable.py#L259-L260 suggest that when a tuple is passed in, it is converted to a 0-dimensional array of dtype object via https://github.com/pydata/xarray/blob/0f4e99d036b0d6d76a3271e6191eacbc9922662f/xarray/core/utils.py#L344-L348

Maybe removing the tuple type check and relying on these lines https://github.com/pydata/xarray/blob/0f4e99d036b0d6d76a3271e6191eacbc9922662f/xarray/core/variable.py#L287-L288 would be better?
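
For context, a rough numpy-only sketch of the conversion being described (the helper body here is an assumption based on its name, not copied from xarray):

```python
import numpy as np

def to_0d_object_array(value):
    # Assumed behavior: wrap an arbitrary Python object in a
    # zero-dimensional object array instead of treating it as a sequence.
    result = np.empty((), dtype=object)
    result[()] = value
    return result

wrapped = to_0d_object_array((1, 2, 3))
print(wrapped.shape, wrapped.dtype)  # -> () object
# A list of the same values becomes an ordinary 1-d array instead:
print(np.asarray([1, 2, 3]).shape)   # -> (3,)
```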

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `Variable/IndexVariable` do not accept a tuple for data. 1652227927
1478331489 https://github.com/pydata/xarray/issues/7377#issuecomment-1478331489 https://api.github.com/repos/pydata/xarray/issues/7377 IC_kwDOAMm_X85YHYxh zoj613 44142765 2023-03-21T17:42:29Z 2023-03-21T17:42:29Z NONE

Does this work for an array of quantiles, and does it also require the time coordinate to have a single chunk?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Aggregating a dimension using the Quantiles method with `skipna=True` is very slow 1497031605
1474097791 https://github.com/pydata/xarray/issues/6891#issuecomment-1474097791 https://api.github.com/repos/pydata/xarray/issues/6891 IC_kwDOAMm_X85X3PJ_ zoj613 44142765 2023-03-17T16:30:58Z 2023-03-17T16:31:12Z NONE

@alrho007 I still get this error using version 2022.9.0:

```python
In [1]: import pandas as pd
   ...: import xarray as xr
   ...: import numpy as np
   ...:
   ...: da = xr.DataArray(
   ...:     np.random.rand(4, 3),
   ...:     [
   ...:         ("time", pd.date_range("2000-01-01", periods=4)),
   ...:         ("space", ["IA", "IL", "IN"]),
   ...:     ],
   ...: )
   ...: da.curvefit(coords=["time"], func=lambda x, params: x, method="trf")
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[1], line 18
      3 import numpy as np
      5 da = xr.DataArray(
      6     np.random.rand(4, 3),
   (...)
     17 )
---> 18 da.curvefit(coords=["time"], func=lambda x, params: x, method="trf")

TypeError: curvefit() got an unexpected keyword argument 'method'

In [2]: xr.__version__
Out[2]: '2022.9.0'
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Passing extra keyword arguments to `curvefit` throws an exception. 1331985070
1464128552 https://github.com/pydata/xarray/issues/3653#issuecomment-1464128552 https://api.github.com/repos/pydata/xarray/issues/3653 IC_kwDOAMm_X85XRNQo zoj613 44142765 2023-03-10T17:24:35Z 2023-03-10T17:26:06Z NONE

More concise syntax for the same thing:

```python
import xarray as xr
import fsspec

url = 'https://www.ldeo.columbia.edu/~rpa/NOAA_NCDC_ERSST_v3b_SST.nc'
with fsspec.open(url) as fobj:
    ds = xr.open_dataset(fobj)
    print(ds)
```

When trying this snippet more than once I get a `ValueError: I/O operation on closed file.` exception. Any idea why this might be the case?
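
One hedged guess at the cause (an assumption, not confirmed in the thread): `open_dataset` reads lazily, so accessing the data after the `with` block has closed the file object will fail. Forcing an eager load inside the block would sidestep that:

```python
import fsspec
import xarray as xr

url = 'https://www.ldeo.columbia.edu/~rpa/NOAA_NCDC_ERSST_v3b_SST.nc'
with fsspec.open(url) as fobj:
    # .load() pulls all values into memory before fobj is closed,
    # so nothing tries to read from a closed file afterwards.
    ds = xr.open_dataset(fobj).load()
print(ds)
```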

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  "[Errno -90] NetCDF: file not found: b" when opening netCDF from server 543197350
1453906696 https://github.com/pydata/xarray/issues/4122#issuecomment-1453906696 https://api.github.com/repos/pydata/xarray/issues/4122 IC_kwDOAMm_X85WqNsI zoj613 44142765 2023-03-03T18:08:07Z 2023-03-03T18:08:07Z NONE

Based on the docs:

> The default format is NETCDF4 if you are saving a file to disk and have the netCDF4-python library available. Otherwise, xarray falls back to using scipy to write netCDF files and defaults to the NETCDF3_64BIT format (scipy does not support netCDF4).

It appears the scipy fallback is safe if one does not want to be bothered with specifying engines. By the way, what are the limitations of the NETCDF3 standard vs NETCDF4?
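
For example, the fallback can be made explicit rather than relying on auto-detection (a minimal sketch; `out.nc` is a placeholder path):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"t": ("x", np.arange(3.0))})
# Spell out the fallback described above: NETCDF3_64BIT via scipy.
ds.to_netcdf("out.nc", format="NETCDF3_64BIT", engine="scipy")
```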

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Document writing netcdf from xarray directly to S3 631085856
1453897364 https://github.com/pydata/xarray/issues/4122#issuecomment-1453897364 https://api.github.com/repos/pydata/xarray/issues/4122 IC_kwDOAMm_X85WqLaU zoj613 44142765 2023-03-03T18:00:33Z 2023-03-03T18:00:33Z NONE

I never needed to specify an engine when writing; you only need one when reading the file. I use `engine="scipy"` for reading.
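
A minimal sketch of that split (the paths are placeholders): the engine is named explicitly when reading, while writing relies on xarray's default choice:

```python
import fsspec
import xarray as xr

# Reading through a plain file object: name the engine explicitly.
with fsspec.open("s3://some-bucket/data.nc") as fobj:  # placeholder path
    ds = xr.open_dataset(fobj, engine="scipy").load()

# Writing to a local path: let xarray pick the default engine.
ds.to_netcdf("local_copy.nc")
```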

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Document writing netcdf from xarray directly to S3 631085856
1401040677 https://github.com/pydata/xarray/issues/4122#issuecomment-1401040677 https://api.github.com/repos/pydata/xarray/issues/4122 IC_kwDOAMm_X85Tgi8l zoj613 44142765 2023-01-23T21:49:46Z 2023-01-23T21:52:29Z NONE

What didn't work:

```python
f = fsspec.filesystem("s3", anon=False)
with f.open("some_bucket/some_remote_destination.nc", mode="wb") as ff:
    xr.open_dataset("some_local_file.nc").to_netcdf(ff)
```

This results in an `OSError: [Errno 29] Seek only available in read mode` exception.

Changing the above to

```python
with fsspec.open("simplecache::s3://some_bucket/some_remote_destination.nc", mode="wb") as ff:
    xr.open_dataset("some_local_file.nc").to_netcdf(ff)
```

fixed it, presumably because `simplecache::` stages the write in a seekable local temporary file and only uploads it to S3 when the file is closed.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Document writing netcdf from xarray directly to S3 631085856
1400564474 https://github.com/pydata/xarray/issues/4122#issuecomment-1400564474 https://api.github.com/repos/pydata/xarray/issues/4122 IC_kwDOAMm_X85Teur6 zoj613 44142765 2023-01-23T15:44:20Z 2023-01-23T15:44:20Z NONE

> '/silt/usgs/Projects/stellwagen/CF-1.6/BUZZ_BAY/2651-A.cdf') outfile = fsspec.open('simpl

Thanks, this actually worked for me. It seems as though initializing an s3 store using `fs = fsspec.S3FileSystem(...)` beforehand and using it as a context manager via `with fs.open(...) as out: data.to_netcdf(out)` caused the failure.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Document writing netcdf from xarray directly to S3 631085856
1400519887 https://github.com/pydata/xarray/issues/4122#issuecomment-1400519887 https://api.github.com/repos/pydata/xarray/issues/4122 IC_kwDOAMm_X85TejzP zoj613 44142765 2023-01-23T15:16:21Z 2023-01-23T15:16:21Z NONE

Is there any reliable way to write an `xr.Dataset` object as a netCDF file in 2023? I tried using the above approach with fsspec, but I keep getting an `OSError: [Errno 29] Seek only available in read mode` exception.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Document writing netcdf from xarray directly to S3 631085856
1308665605 https://github.com/pydata/xarray/issues/508#issuecomment-1308665605 https://api.github.com/repos/pydata/xarray/issues/508 IC_kwDOAMm_X85OAKcF zoj613 44142765 2022-11-09T12:16:13Z 2022-11-09T12:16:13Z NONE

Any plans to support this?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Ignore missing variables when concatenating datasets? 98587746
1253668750 https://github.com/pydata/xarray/issues/5888#issuecomment-1253668750 https://api.github.com/repos/pydata/xarray/issues/5888 IC_kwDOAMm_X85KuXeO zoj613 44142765 2022-09-21T12:53:18Z 2022-09-21T12:53:18Z NONE

I experienced something similar, but with a string glob instead. The `encoding["source"]` lookup raises a `KeyError` when trying to preprocess the data loaded by `open_mfdataset`.
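
A rough sketch of the failure mode being described (the glob and the callback are hypothetical):

```python
import xarray as xr

def preprocess(ds):
    # With some open paths, ds.encoding has no "source" entry,
    # so this lookup raises KeyError.
    print(ds.encoding["source"])
    return ds

ds = xr.open_mfdataset("data_*.nc", preprocess=preprocess)  # hypothetical glob
```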

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_[mf]dataset ds.encoding['source'] for pathlib.Path? 1033950863
1074074385 https://github.com/pydata/xarray/issues/6395#issuecomment-1074074385 https://api.github.com/repos/pydata/xarray/issues/6395 IC_kwDOAMm_X85ABRMR zoj613 44142765 2022-03-21T15:56:27Z 2022-03-21T15:56:27Z NONE

> This is expected behavior: Xarray variable objects can wrap numpy arrays, but generally they don't make copies of the underlying data. So it is possible for two different variables to wrap the same numpy array, as in your example and in the example below.
>
> ```python
> import numpy as np
> import xarray as xr
>
> data = np.array([1, 2, 3])
>
> v1 = xr.Variable("x", data)
> v2 = xr.Variable("x", data)
>
> print(v1)
> # <xarray.Variable (x: 3)>
> # array([1, 2, 3])
>
> print(v2)
> # <xarray.Variable (x: 3)>
> # array([1, 2, 3])
>
> data[0] = 10
>
> print(v1)
> # <xarray.Variable (x: 3)>
> # array([10, 2, 3])
>
> print(v2)
> # <xarray.Variable (x: 3)>
> # array([10, 2, 3])
> ```
>
> `Dataset.copy(deep=True)` makes a deep copy, thus copying the underlying data.

Thanks for the quick response. I wasn't aware of this; I had assumed a new Dataset object would hold a copy of the underlying `var_map` data passed into it. Maybe this could be mentioned as a note in the docstring somewhere?
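
A minimal sketch of that distinction (assuming the usual no-copy wrapping shown above):

```python
import numpy as np
import xarray as xr

data = np.array([1, 2, 3])
ds = xr.Dataset({"v": ("x", data)})

shallow = ds.copy()        # still wraps the same numpy buffer
deep = ds.copy(deep=True)  # copies the underlying data

data[0] = 10
print(shallow["v"].values)  # [10  2  3] -- shares memory with `data`
print(deep["v"].values)     # [1 2 3]   -- decoupled by the deep copy
```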

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  two Dataset objects reference the same numpy array memory block upon creation. 1175517164

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```