issue_comments

10 rows where author_association = "NONE" and user = 2656596 sorted by updated_at descending

Issues (comment counts):

  • `xr.open_dataset` with `pydapdatastore` raises `too many indices for array` error · 2
  • MultiIndex serialization to NetCDF · 1
  • Append along an unlimited dimension to an existing netCDF file · 1
  • Serializing attrs · 1
  • datetime parser different between 32bit and 64bit installations · 1
  • Remote writing NETCDF4 files to Amazon S3 · 1
  • Should performance be equivalent when opening with chunks or re-chunking a dataset? · 1
  • Add defaults during concat 508 · 1
  • jupyter repr caching deleted netcdf file · 1

Columns: id, html_url, issue_url, node_id, user, created_at, updated_at, author_association, body, reactions, performed_via_github_app, issue

id: 1267571723 · node_id: IC_kwDOAMm_X85LjZwL
html_url: https://github.com/pydata/xarray/issues/4240#issuecomment-1267571723
issue_url: https://api.github.com/repos/pydata/xarray/issues/4240
user: mullenkamp (2656596) · author_association: NONE
created_at: 2022-10-04T20:58:37Z · updated_at: 2022-10-04T21:00:08Z

Running `xarray.backends.file_manager.FILE_CACHE.clear()` fixed the issue for me; I couldn't find any other way to stop xarray from pulling up old data from a newly saved file. I'm using the h5netcdf engine with xarray version 2022.6.0.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: jupyter repr caching deleted netcdf file (662505658)
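
As a minimal sketch of the workaround in the comment above (the file name and variable are made up), clearing xarray's module-level file cache forces the next open to re-read the file from disk:

import numpy as np
import xarray as xr
from xarray.backends.file_manager import FILE_CACHE

# Hypothetical round-trip: overwrite a file, then re-open it.
ds = xr.Dataset({"x": ("t", np.arange(3))})
ds.to_netcdf("data.nc", engine="h5netcdf", mode="w")

# Drop any cached file handles so the re-open below cannot serve
# stale data from a handle opened before the file was overwritten.
FILE_CACHE.clear()

fresh = xr.open_dataset("data.nc", engine="h5netcdf")
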
id: 906023525 · node_id: IC_kwDOAMm_X842ANJl
html_url: https://github.com/pydata/xarray/issues/3486#issuecomment-906023525
issue_url: https://api.github.com/repos/pydata/xarray/issues/3486
user: mullenkamp (2656596) · author_association: NONE
created_at: 2021-08-26T02:19:50Z · updated_at: 2021-08-26T02:19:50Z

This seems to be an ongoing problem (Unexpected behaviour when chunking with multiple netcdf files in xarray/dask, Performance of chunking in xarray / dask when opening and re-chunking a dataset) that has not been resolved nor has feedback been provided.

I've been running into this problem trying to handle netCDF files that are larger than my RAM. From my testing, chunks must be passed to open_mfdataset to be of any use; calling the chunk method on the dataset after opening seems to do nothing in this use case.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Should performance be equivalent when opening with chunks or re-chunking a dataset? (517799069)
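
For illustration, the two approaches described in the comment above might look like this (the file pattern and chunk sizes are placeholders); only the first yields lazily loaded, dask-backed arrays with the requested chunking from the start:

import xarray as xr

# Chunk at open time: files are read lazily as dask arrays with the
# requested chunk sizes (requires dask to be installed).
ds_lazy = xr.open_mfdataset("output_*.nc", chunks={"time": 1000})

# Re-chunking after opening only re-expresses whatever chunking (or
# eager loading) already happened when the files were opened.
ds_later = xr.open_mfdataset("output_*.nc").chunk({"time": 1000})
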
id: 871210504 · node_id: MDEyOklzc3VlQ29tbWVudDg3MTIxMDUwNA==
html_url: https://github.com/pydata/xarray/pull/3545#issuecomment-871210504
issue_url: https://api.github.com/repos/pydata/xarray/issues/3545
user: mullenkamp (2656596) · author_association: NONE
created_at: 2021-06-30T08:41:29Z · updated_at: 2021-06-30T08:41:29Z

Has this been implemented? Or is it still failing the tests?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Add defaults during concat 508 (524043729)

id: 723528226 · node_id: MDEyOklzc3VlQ29tbWVudDcyMzUyODIyNg==
html_url: https://github.com/pydata/xarray/issues/2995#issuecomment-723528226
issue_url: https://api.github.com/repos/pydata/xarray/issues/2995
user: mullenkamp (2656596) · author_association: NONE
created_at: 2020-11-08T04:13:39Z · updated_at: 2020-11-08T04:13:39Z

Hi all,

I'd love to have an effective method to save a netcdf4 Dataset to a bytes object (for the S3 purpose specifically). I'm currently using netcdf3 through scipy as described earlier which works fine, but I'm just missing out on some newer netcdf4 options as a consequence.

Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Remote writing NETCDF4 files to Amazon S3 (449706080)
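
A rough sketch of the scipy/NETCDF3 approach mentioned in the comment above (the bucket, key, and dataset are made up, and boto3 is just one way to do the upload): calling to_netcdf() with no path returns the file contents as bytes, which can then be pushed to S3 as an in-memory object.

import boto3
import numpy as np
import xarray as xr

ds = xr.Dataset({"x": ("t", np.arange(3))})

# With no path argument, to_netcdf() returns bytes; this goes through
# the scipy backend and therefore writes NETCDF3.
nc_bytes = ds.to_netcdf()

# Hypothetical upload of the in-memory file to S3.
s3 = boto3.client("s3")
s3.put_object(Bucket="my-bucket", Key="data.nc", Body=nc_bytes)
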
id: 509039163 · node_id: MDEyOklzc3VlQ29tbWVudDUwOTAzOTE2Mw==
html_url: https://github.com/pydata/xarray/issues/2993#issuecomment-509039163
issue_url: https://api.github.com/repos/pydata/xarray/issues/2993
user: mullenkamp (2656596) · author_association: NONE
created_at: 2019-07-07T23:30:40Z · updated_at: 2019-07-07T23:30:40Z

After a little bit of testing, I've found out that setting decode_cf=False reads it in without an error.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: `xr.open_dataset` with `pydapdatastore` raises `too many indices for array` error (449004641)
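
For illustration, the workaround amounts to something like the following (the OPeNDAP URL is a placeholder); skipping CF decoding avoids the error, at the cost of leaving times, scale factors, and fill values as raw values:

import xarray as xr

url = "https://example.com/opendap/some_dataset"  # hypothetical URL
ds = xr.open_dataset(url, engine="pydap", decode_cf=False)
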
id: 509011157 · node_id: MDEyOklzc3VlQ29tbWVudDUwOTAxMTE1Nw==
html_url: https://github.com/pydata/xarray/issues/2993#issuecomment-509011157
issue_url: https://api.github.com/repos/pydata/xarray/issues/2993
user: mullenkamp (2656596) · author_association: NONE
created_at: 2019-07-07T16:00:38Z · updated_at: 2019-07-07T16:00:38Z

I would also like to see if this issue could be looked at. I'm also trying to query the NASA server. This used to work in previous versions of xarray. Thanks.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: `xr.open_dataset` with `pydapdatastore` raises `too many indices for array` error (449004641)

id: 450477528 · node_id: MDEyOklzc3VlQ29tbWVudDQ1MDQ3NzUyOA==
html_url: https://github.com/pydata/xarray/issues/1672#issuecomment-450477528
issue_url: https://api.github.com/repos/pydata/xarray/issues/1672
user: mullenkamp (2656596) · author_association: NONE
created_at: 2018-12-29T09:01:45Z · updated_at: 2018-12-29T09:01:45Z

I would love to have this capability. As @shoyer mentioned, being able to add time steps of any sort to existing netcdf files would be really beneficial. The only real alternative is to save a netcdf file for each additional time step, even if there are tons of time steps and each file is only a couple hundred KB (which is my situation with NASA data).

I'll look into it if I get some time...

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Append along an unlimited dimension to an existing netCDF file (269700511)
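
As a rough sketch of what the requested capability amounts to, done manually with the netCDF4 library rather than through xarray (the file, dimension, and variable names are hypothetical):

import numpy as np
import netCDF4

# Append one new time step to an existing file whose "time" dimension
# was created as unlimited.
with netCDF4.Dataset("existing.nc", mode="a") as nc:
    n = len(nc.dimensions["time"])  # current number of time steps
    nc.variables["time"][n] = n  # new time coordinate value
    nc.variables["precip"][n, :, :] = np.zeros((10, 20))  # new data slice
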
id: 420875176 · node_id: MDEyOklzc3VlQ29tbWVudDQyMDg3NTE3Ng==
html_url: https://github.com/pydata/xarray/issues/2411#issuecomment-420875176
issue_url: https://api.github.com/repos/pydata/xarray/issues/2411
user: mullenkamp (2656596) · author_association: NONE
created_at: 2018-09-13T03:54:17Z · updated_at: 2018-09-13T03:54:17Z

I'm very sorry. It turns out that it was an old dependency issue with the 64bit version. I thought I had updated the dependencies for both installations, but I hadn't.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: datetime parser different between 32bit and 64bit installations (359307319)

id: 341241720 · node_id: MDEyOklzc3VlQ29tbWVudDM0MTI0MTcyMA==
html_url: https://github.com/pydata/xarray/issues/1681#issuecomment-341241720
issue_url: https://api.github.com/repos/pydata/xarray/issues/1681
user: mullenkamp (2656596) · author_association: NONE
created_at: 2017-11-01T21:03:10Z · updated_at: 2017-11-02T04:45:12Z

Thanks for the reply. I'm currently using netcdf4 version 1.2.2, which seems slightly old but that's the default conda package. Ok, so just to clarify... The TypeError is correct in stating that each attribute value must be "a number, string, ndarray or a list/tuple of numbers/strings"? Those are the only options, correct?

Thanks again.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Serializing attrs (270440308)
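
A small illustration of the constraint discussed above (attribute names and values are arbitrary); the dict-valued attribute in the last step is what trips the TypeError on write:

import numpy as np
import xarray as xr

ds = xr.Dataset({"x": ("t", np.arange(3))})

# These attribute values serialize fine: numbers, strings, ndarrays,
# and lists/tuples of numbers or strings.
ds.attrs["station"] = "66401"
ds.attrs["heights"] = [1.5, 10.0, 30.0]
ds.to_netcdf("ok.nc")

# A nested dict is not one of the allowed types and fails to serialize.
ds.attrs["metadata"] = {"source": "NASA"}
ds.to_netcdf("fails.nc")  # raises TypeError
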
id: 285972485 · node_id: MDEyOklzc3VlQ29tbWVudDI4NTk3MjQ4NQ==
html_url: https://github.com/pydata/xarray/issues/1077#issuecomment-285972485
issue_url: https://api.github.com/repos/pydata/xarray/issues/1077
user: mullenkamp (2656596) · author_association: NONE
created_at: 2017-03-12T20:12:42Z · updated_at: 2017-03-12T20:12:42Z

I would love to have this functionality as well. Unfortunately, I'm not knowledgeable enough to help decide on the internal structure for MultiIndexes, though.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: MultiIndex serialization to NetCDF (187069161)
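
For context, a hedged sketch of what fails and the usual workaround (the names are made up): xarray refuses to write a MultiIndex coordinate directly, and reset_index() flattens it back into ordinary coordinates that serialize normally.

import pandas as pd
import xarray as xr

idx = pd.MultiIndex.from_product([["a", "b"], [1, 2]], names=["letter", "number"])
ds = xr.Dataset({"x": ("sample", [10, 20, 30, 40])}, coords={"sample": idx})

# ds.to_netcdf("multi.nc")  # fails: a MultiIndex cannot be serialized

# Workaround: turn the MultiIndex levels back into plain coordinates.
ds.reset_index("sample").to_netcdf("flat.nc")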

Table schema:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
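
For example, the query behind this page (the filter and ordering given in the description at the top) can be reproduced against a local copy of the database; the file name github.db is a placeholder:

import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical path to the database
rows = conn.execute(
    """
    SELECT id, html_url, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'NONE' AND [user] = 2656596
    ORDER BY updated_at DESC
    """
).fetchall()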