
issue_comments


6 rows where user = 12728107 sorted by updated_at descending


issue 4

  • load_dataset fails when filename is unicode 2
  • xarray fails on Travis CI 2
  • How should xarray use/support sparse arrays? 1
  • xarray throws FutureWarning when using .interp method 1

user 1

  • pnsaevik · 6

author_association 1

  • NONE 6
id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
1280544883 https://github.com/pydata/xarray/issues/7177#issuecomment-1280544883 https://api.github.com/repos/pydata/xarray/issues/7177 IC_kwDOAMm_X85MU5Bz pnsaevik 12728107 2022-10-17T09:18:07Z 2022-10-17T09:18:07Z NONE

I thought I was getting the newest version of xarray when creating a conda environment from scratch. I see now that this is not the case. Sorry about that!

Upgrading xarray using pip instead of conda fixed the issue.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray throws FutureWarning when using .interp method 1411125069
1119366649 https://github.com/pydata/xarray/issues/3974#issuecomment-1119366649 https://api.github.com/repos/pydata/xarray/issues/3974 IC_kwDOAMm_X85CuC35 pnsaevik 12728107 2022-05-06T08:13:04Z 2022-05-06T08:13:04Z NONE

After investigating, the issue appears to be caused by a bug in the netCDF library, which in turn traces back to a bug in the HDF5 library. The relevant issues are:

  • netcdf4: https://github.com/Unidata/netcdf4-python/issues/941
  • hdf5: https://forum.hdfgroup.org/t/non-english-characters-in-hdf5-file-name/4627/8

Quoting the dev responding in the hdf5 forum:

I would be thrilled to fix this issue. The problem is that it’s a huge amount of effort with no obvious funding source. Everyone wants this problem to be fixed but nobody wants it fixed so badly that they are willing to pay for an engineer to spend the better part of a year fixing it properly. A lot of people seem to think that we just need to tweak the “open file” code, but that isn’t true. So much stuff in the library is affected by Unicode file names on Windows and doing a hasty job will risk dramatically increasing our technical debt and bug count.
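For illustration, a minimal sketch of the failure mode discussed in this thread (xarray.load_dataset on a non-ASCII filename), assuming an affected Windows build; the file name and data are made up:

import xarray as xr

# Assumed failure mode: on Windows, a path containing non-ASCII characters can
# fail inside the netCDF4/HDF5 layer when the file is opened.
ds = xr.Dataset({"temperature": ("x", [1.0, 2.0, 3.0])})
ds.to_netcdf("måling.nc")                # hypothetical non-ASCII filename
reopened = xr.load_dataset("måling.nc")  # raises inside the HDF5 layer on affected builds
print(reopened)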

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  load_dataset fails when filename is unicode 600268506
873213823 https://github.com/pydata/xarray/issues/5566#issuecomment-873213823 https://api.github.com/repos/pydata/xarray/issues/5566 MDEyOklzc3VlQ29tbWVudDg3MzIxMzgyMw== pnsaevik 12728107 2021-07-02T19:23:41Z 2021-07-02T19:27:59Z NONE

Thanks! I didn't manage to replicate the error outside of Travis CI, so I was clueless as to what was going on...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray fails on Travis CI 935773022
873210639 https://github.com/pydata/xarray/issues/5566#issuecomment-873210639 https://api.github.com/repos/pydata/xarray/issues/5566 MDEyOklzc3VlQ29tbWVudDg3MzIxMDYzOQ== pnsaevik 12728107 2021-07-02T19:16:29Z 2021-07-02T19:19:10Z NONE

I can't import xarray, so I can't run the command. But "pip list" returns the following:

atomicwrites 1.3.0
attrs 19.1.0
certifi 2019.3.9
importlib-metadata 0.18
mock 4.0.2
more-itertools 7.0.0
nose 1.3.7
numpy 1.21.0
packaging 19.0
pandas 1.3.0
pip 20.1.1
pipenv 2018.11.26
pluggy 0.12.0
py 1.8.0
pyparsing 2.4.0
pytest 5.4.3
python-dateutil 2.8.1
pytz 2021.1
setuptools 57.0.0
six 1.12.0
virtualenv 16.6.0
virtualenv-clone 0.5.3
wcwidth 0.1.7
wheel 0.34.2
xarray 0.18.2
zipp 0.5.1

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray fails on Travis CI 935773022
634551055 https://github.com/pydata/xarray/issues/3213#issuecomment-634551055 https://api.github.com/repos/pydata/xarray/issues/3213 MDEyOklzc3VlQ29tbWVudDYzNDU1MTA1NQ== pnsaevik 12728107 2020-05-27T09:44:55Z 2020-05-27T09:44:55Z NONE

Thanks for looking into sparse arrays for xarray. I have a use case that I believe is common:

  1. Load a netCDF file written using ragged array representation
  2. Extract a slice in either coordinate direction
  3. Store back into netCDF

At the very least, I would love such functionality...
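A rough sketch of that workflow; the file names, the dimension name obs, and the transparent handling of the CF ragged-array layout are assumptions rather than existing xarray behaviour:

import xarray as xr

# 1. Load a netCDF file written using a ragged array representation
ds = xr.open_dataset("trajectories.nc")     # hypothetical ragged-array file
# 2. Extract a slice in either coordinate direction
subset = ds.isel(obs=slice(0, 1000))        # 'obs' is an assumed dimension name
# 3. Store back into netCDF
subset.to_netcdf("trajectories_subset.nc")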

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  How should xarray use/support sparse arrays? 479942077
614017215 https://github.com/pydata/xarray/issues/3974#issuecomment-614017215 https://api.github.com/repos/pydata/xarray/issues/3974 MDEyOklzc3VlQ29tbWVudDYxNDAxNzIxNQ== pnsaevik 12728107 2020-04-15T12:46:24Z 2020-04-15T12:46:24Z NONE

Update: The error does not seem to be present on Linux:

INSTALLED VERSIONS

commit: None
python: 3.7.6 (default, Jan 8 2020, 19:59:22) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-1062.18.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: None.None
libhdf5: 1.10.4
libnetcdf: 4.6.1

xarray: 0.15.0
pandas: 1.0.1
numpy: 1.18.1
scipy: 1.4.1
netCDF4: 1.4.2
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.0.4.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.1
dask: 2.10.1
distributed: 2.10.0
matplotlib: 3.1.3
cartopy: 0.17.0
seaborn: 0.10.0
numbagg: None
setuptools: 45.2.0.post20200210
pip: 20.0.2
conda: None
pytest: 5.3.5
IPython: 7.12.0
sphinx: 2.4.0
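An environment report in this "INSTALLED VERSIONS" format is what xarray's built-in diagnostic helper prints; a minimal way to reproduce it for the current environment:

import xarray as xr

# Prints the "INSTALLED VERSIONS" report shown above.
xr.show_versions()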

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  load_dataset fails when filename is unicode 600268506


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
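A sketch of the query behind this page, run against a local copy of the underlying SQLite database; the file name github.db is an assumption:

import sqlite3

# Select the six comments shown on this page: comments by user 12728107,
# ordered by updated_at, newest first.
conn = sqlite3.connect("github.db")  # hypothetical local copy of the database
rows = conn.execute(
    """
    SELECT id, html_url, created_at, updated_at, body
    FROM issue_comments
    WHERE user = 12728107
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, html_url, created_at, updated_at, body in rows:
    print(comment_id, updated_at, html_url)
conn.close()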