issue_comments

5 rows where author_association = "NONE" and issue = 449706080 sorted by updated_at descending


user 5

  • NicWayand 1
  • euyuil 1
  • NowanIlfideme 1
  • mullenkamp 1
  • rebeccaringuette 1

issue 1

  • Remote writing NETCDF4 files to Amazon S3 · 5

author_association 1

  • NONE · 5
id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
1516635334 https://github.com/pydata/xarray/issues/2995#issuecomment-1516635334 https://api.github.com/repos/pydata/xarray/issues/2995 IC_kwDOAMm_X85aZgTG rebeccaringuette 49281118 2023-04-20T16:38:46Z 2023-04-20T16:38:46Z NONE

Related issue: https://github.com/pydata/xarray/issues/4122

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Remote writing NETCDF4 files to Amazon S3 449706080
723528226 https://github.com/pydata/xarray/issues/2995#issuecomment-723528226 https://api.github.com/repos/pydata/xarray/issues/2995 MDEyOklzc3VlQ29tbWVudDcyMzUyODIyNg== mullenkamp 2656596 2020-11-08T04:13:39Z 2020-11-08T04:13:39Z NONE

Hi all,

I'd love to have an effective method to save a netCDF4 Dataset to a bytes object (specifically for the S3 use case). I'm currently using netCDF3 through scipy as described earlier, which works fine, but I'm missing out on some of the newer netCDF4 features as a consequence.

Thanks!
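
For context, the netCDF3-via-scipy route described here looks roughly like the sketch below; the bucket and key are placeholders, and `s3fs` is assumed to be installed. With no target path, `to_netcdf()` returns the serialized file as bytes, which in this thread's timeframe went through the scipy engine and therefore produced netCDF3.

```python
import s3fs
import xarray as xr

ds = xr.Dataset({"temperature": ("time", [280.1, 281.3, 279.8])})

# With no target path, to_netcdf() serializes the dataset to bytes;
# via the scipy engine this writes netCDF3, not netCDF4.
nc3_bytes = ds.to_netcdf()

# Upload the bytes to S3 (bucket and key are placeholders).
fs = s3fs.S3FileSystem()
with fs.open("my-bucket/data/example.nc", "wb") as f:
    f.write(nc3_bytes)
```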

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Remote writing NETCDF4 files to Amazon S3 449706080
659441282 https://github.com/pydata/xarray/issues/2995#issuecomment-659441282 https://api.github.com/repos/pydata/xarray/issues/2995 MDEyOklzc3VlQ29tbWVudDY1OTQ0MTI4Mg== euyuil 1539596 2020-07-16T14:15:28Z 2020-07-16T14:15:28Z NONE

It looks like #23 is related. Do we have a plan for this?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Remote writing NETCDF4 files to Amazon S3 449706080
657798184 https://github.com/pydata/xarray/issues/2995#issuecomment-657798184 https://api.github.com/repos/pydata/xarray/issues/2995 MDEyOklzc3VlQ29tbWVudDY1Nzc5ODE4NA== NowanIlfideme 2067093 2020-07-13T21:17:06Z 2020-07-13T21:17:06Z NONE

I ran into this issue; here's a simple workaround that seems to work:

```python
import netCDF4
import xarray as xr
from xarray.backends import NetCDF4DataStore
from xarray.backends.api import dump_to_store


def dataset_to_bytes(ds: xr.Dataset, name: str = "my-dataset") -> bytes:
    """Convert a dataset to bytes using an in-memory (diskless) netCDF4 file."""
    nc4_ds = netCDF4.Dataset(name, mode="w", diskless=True, memory=ds.nbytes)
    nc4_store = NetCDF4DataStore(nc4_ds)
    dump_to_store(ds, nc4_store)
    # Closing a diskless dataset returns a memoryview of the in-memory file.
    res_mem = nc4_ds.close()
    return res_mem.tobytes()
```

I tested this using the following:

```python
from io import BytesIO

fname = "REDACTED.nc"
ds = xr.load_dataset(fname)
ds_bytes = dataset_to_bytes(ds)
ds2 = xr.load_dataset(BytesIO(ds_bytes))

assert ds2.equals(ds) and all(ds2.attrs[k] == ds.attrs[k] for k in set(ds2.attrs).union(ds.attrs))
```

The assertion holds true, but the file size on disk is different. It's possible they were saved using different netCDF4 versions; I haven't had time to test that.

I tried using just `ds.to_netcdf()`, but I get the following error:

`ValueError: NetCDF 3 does not support type |S32`

That's because it falls back to the 'scipy' engine. It would be nice to have a non-hacky way to write netCDF4 files to byte streams. :smiley:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Remote writing NETCDF4 files to Amazon S3 449706080
518869785 https://github.com/pydata/xarray/issues/2995#issuecomment-518869785 https://api.github.com/repos/pydata/xarray/issues/2995 MDEyOklzc3VlQ29tbWVudDUxODg2OTc4NQ== NicWayand 1117224 2019-08-06T22:39:07Z 2019-08-06T22:39:07Z NONE

Is it possible to read multiple netCDF files on S3 using `open_mfdataset`?
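
One pattern commonly used for this (not confirmed in this thread; the bucket path, anonymous access, and engine choice below are assumptions) is to open the remote objects with `s3fs` and pass the file-like handles to `open_mfdataset`:

```python
import s3fs
import xarray as xr

# Anonymous access works for public buckets; pass credentials otherwise.
fs = s3fs.S3FileSystem(anon=True)

# Bucket and key pattern are placeholders.
paths = fs.glob("my-bucket/data/*.nc")
files = [fs.open(p, mode="rb") for p in paths]

# h5netcdf can read netCDF4 from file-like objects.
ds = xr.open_mfdataset(files, engine="h5netcdf", combine="by_coords")
```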

{
    "total_count": 3,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 3
}
  Remote writing NETCDF4 files to Amazon S3 449706080

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
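
For anyone working directly against the underlying SQLite file rather than the Datasette UI, a query along these lines reproduces the filtered view above (the database filename is an assumption):

```python
import sqlite3

# Filename is a placeholder for whatever SQLite file holds this table.
conn = sqlite3.connect("github.db")

# Comments on issue 449706080 with author_association = 'NONE', newest first,
# matching the "5 rows where ..." view at the top of this page.
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'NONE' AND issue = ?
    ORDER BY updated_at DESC
    """,
    (449706080,),
).fetchall()

for comment_id, user, created, updated, body in rows:
    print(comment_id, user, updated)
```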