
issue_comments

9 rows where issue = 1657396474 sorted by updated_at descending


id · html_url · issue_url · node_id · user · created_at · updated_at ▲ · author_association · body · reactions · performed_via_github_app · issue
1507124290 https://github.com/pydata/xarray/pull/7731#issuecomment-1507124290 https://api.github.com/repos/pydata/xarray/issues/7731 IC_kwDOAMm_X85Z1ORC dcherian 2448579 2023-04-13T14:58:20Z 2023-04-13T14:58:20Z MEMBER

Thanks for patiently working through this Spencer. I'll merge now, and then we can release tomorrow.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Continue to use nanosecond-precision Timestamps in precision-sensitive areas 1657396474
1507109914 https://github.com/pydata/xarray/pull/7731#issuecomment-1507109914 https://api.github.com/repos/pydata/xarray/issues/7731 IC_kwDOAMm_X85Z1Kwa spencerkclark 6628425 2023-04-13T14:50:15Z 2023-04-13T14:50:15Z MEMBER

Thanks for noting that @dcherian -- I think I got to all of them now.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Continue to use nanosecond-precision Timestamps in precision-sensitive areas 1657396474
1506285455 https://github.com/pydata/xarray/pull/7731#issuecomment-1506285455 https://api.github.com/repos/pydata/xarray/issues/7731 IC_kwDOAMm_X85ZyBeP dcherian 2448579 2023-04-13T03:35:01Z 2023-04-13T03:35:01Z MEMBER

There are a bunch of warnings in the tests that could be silenced:

    D:\a\xarray\xarray\xarray\tests\test_dataset.py:516: UserWarning: Converting non-nanosecond precision datetime values to nanosecond precision. This behavior can eventually be relaxed in xarray, as it is an artifact from pandas which is now beginning to support non-nanosecond precision values. This warning is caused by passing non-nanosecond np.datetime64 or np.timedelta64 values to the DataArray or Variable constructor; it can be silenced by converting the values to nanosecond precision ahead of time.

But we can also just merge quickly to get a release out.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Continue to use nanosecond-precision Timestamps in precision-sensitive areas 1657396474
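The conversion that the warning above recommends can be sketched without xarray at all; a minimal NumPy example (the array value is made up for illustration):

```python
import numpy as np

# A non-nanosecond datetime64 array; passing values like this to the
# DataArray or Variable constructor is what triggers the UserWarning.
seconds = np.array(["2000-01-01T00:00:00"], dtype="datetime64[s]")

# Converting to nanosecond precision ahead of time silences it.
nanoseconds = seconds.astype("datetime64[ns]")

print(nanoseconds.dtype)  # datetime64[ns]
```

The cast changes only the storage unit, not the represented instant, so it is safe as long as the values fit within the nanosecond-precision range (roughly years 1678 to 2262).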
1506059299 https://github.com/pydata/xarray/pull/7731#issuecomment-1506059299 https://api.github.com/repos/pydata/xarray/issues/7731 IC_kwDOAMm_X85ZxKQj spencerkclark 6628425 2023-04-12T22:42:26Z 2023-04-12T22:42:26Z MEMBER

Thanks all for the help! Fingers crossed things should be all green now. Happy to address any more review comments.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Continue to use nanosecond-precision Timestamps in precision-sensitive areas 1657396474
1505399396 https://github.com/pydata/xarray/pull/7731#issuecomment-1505399396 https://api.github.com/repos/pydata/xarray/issues/7731 IC_kwDOAMm_X85ZupJk dcherian 2448579 2023-04-12T14:41:02Z 2023-04-12T14:41:17Z MEMBER

RTD failures are real:

    WARNING: [autosummary] failed to import xarray.CFTimeIndex.is_all_dates. Possible hints: * ImportError: * AttributeError: type object 'CFTimeIndex' has no attribute 'is_all_dates' * ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
    WARNING: [autosummary] failed to import xarray.CFTimeIndex.is_mixed. Possible hints: * ImportError: * AttributeError: type object 'CFTimeIndex' has no attribute 'is_mixed' * ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
    WARNING: [autosummary] failed to import xarray.CFTimeIndex.is_monotonic. Possible hints: * ImportError: * AttributeError: type object 'CFTimeIndex' has no attribute 'is_monotonic' * ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
    WARNING: [autosummary] failed to import xarray.CFTimeIndex.is_type_compatible. Possible hints: * AttributeError: type object 'CFTimeIndex' has no attribute 'is_type_compatible' * ImportError: * ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
    WARNING: [autosummary] failed to import xarray.CFTimeIndex.set_value. Possible hints: * ImportError: * AttributeError: type object 'CFTimeIndex' has no attribute 'set_value' * ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
    WARNING: [autosummary] failed to import xarray.CFTimeIndex.to_native_types. Possible hints: * ImportError: * AttributeError: type object 'CFTimeIndex' has no attribute 'to_native_types' * ModuleNotFoundError: No module named 'xarray.CFTimeIndex'

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Continue to use nanosecond-precision Timestamps in precision-sensitive areas 1657396474
1499107154 https://github.com/pydata/xarray/pull/7731#issuecomment-1499107154 https://api.github.com/repos/pydata/xarray/issues/7731 IC_kwDOAMm_X85ZWo9S spencerkclark 6628425 2023-04-06T13:57:56Z 2023-04-06T13:57:56Z MEMBER

I'm fine waiting until #7724 is merged to let our main CI cover this. Indeed the upstream tests are flaky. Locally I just installed pandas 2 via pip to do testing during development.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Continue to use nanosecond-precision Timestamps in precision-sensitive areas 1657396474
1499092482 https://github.com/pydata/xarray/pull/7731#issuecomment-1499092482 https://api.github.com/repos/pydata/xarray/issues/7731 IC_kwDOAMm_X85ZWlYC keewis 14808389 2023-04-06T13:48:06Z 2023-04-06T13:48:06Z MEMBER

Sorry, I forgot about being able to use the upstream-dev environment for testing. That should solve just fine, since by design it ignores all pinned dependencies; but we currently get occasional segfaults and unrelated failing tests, and we only test on a single OS / Python version.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Continue to use nanosecond-precision Timestamps in precision-sensitive areas 1657396474
1499079927 https://github.com/pydata/xarray/pull/7731#issuecomment-1499079927 https://api.github.com/repos/pydata/xarray/issues/7731 IC_kwDOAMm_X85ZWiT3 jsignell 4806877 2023-04-06T13:38:38Z 2023-04-06T13:38:38Z CONTRIBUTOR

to properly test this, I guess we'd need to merge #7724 first?

Otherwise the env in the upstream test will never solve, right?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Continue to use nanosecond-precision Timestamps in precision-sensitive areas 1657396474
1499063414 https://github.com/pydata/xarray/pull/7731#issuecomment-1499063414 https://api.github.com/repos/pydata/xarray/issues/7731 IC_kwDOAMm_X85ZWeR2 keewis 14808389 2023-04-06T13:26:22Z 2023-04-06T13:26:22Z MEMBER

to properly test this, I guess we'd need to merge #7724 first?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Continue to use nanosecond-precision Timestamps in precision-sensitive areas 1657396474

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
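Given the schema above, the query behind "rows where issue = 1657396474 sorted by updated_at descending" can be reproduced with Python's stdlib sqlite3. This is a minimal sketch: the REFERENCES clauses are dropped because the [users] and [issues] tables are not shown here, and the sample rows copy only a few columns from two of the comments above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Schema from the page above, minus the foreign-key REFERENCES clauses.
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Two sample rows, copying a few columns from the comments above.
conn.executemany(
    "INSERT INTO issue_comments (id, [user], updated_at, author_association, issue) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        (1507124290, 2448579, "2023-04-13T14:58:20Z", "MEMBER", 1657396474),
        (1499063414, 14808389, "2023-04-06T13:26:22Z", "MEMBER", 1657396474),
    ],
)

# ISO 8601 strings sort chronologically, so ORDER BY on the TEXT column works.
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments WHERE issue = ? "
    "ORDER BY updated_at DESC",
    (1657396474,),
).fetchall()
print(rows[0][0])  # 1507124290 (the most recently updated comment)
```

The [idx_issue_comments_issue] index is what makes the `WHERE issue = ?` filter cheap as the table grows.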
Powered by Datasette · About: xarray-datasette