

issue_comments


5 rows where author_association = "NONE" and issue = 662505658 sorted by updated_at descending


id: 1267571723
html_url: https://github.com/pydata/xarray/issues/4240#issuecomment-1267571723
issue_url: https://api.github.com/repos/pydata/xarray/issues/4240
node_id: IC_kwDOAMm_X85LjZwL
user: mullenkamp (2656596)
created_at: 2022-10-04T20:58:37Z
updated_at: 2022-10-04T21:00:08Z
author_association: NONE

Running xarray.backends.file_manager.FILE_CACHE.clear() fixed the issue for me. I couldn't find any other way to stop xarray from pulling up some old data from a newly saved file. I'm using the h5netcdf engine with xarray version 2022.6.0 by the way.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: jupyter repr caching deleted netcdf file (662505658)
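The workaround above works because xarray keeps a global cache of open file handles, keyed in a way that does not notice when the file on disk has been rewritten; clearing the cache forces the next open to re-read from disk. The sketch below illustrates that failure mode with a hypothetical pure-Python cache — `cached_read` and `clear_cache` are illustrative names, not xarray API:

```python
import os
import tempfile

# Hypothetical file cache keyed on path alone, mimicking the behaviour
# described in the issue: once a path has been read, the cached contents
# keep being returned even after the file is rewritten on disk.
_file_cache = {}

def cached_read(path):
    # Serve from the cache if this path has been seen before.
    if path not in _file_cache:
        with open(path) as f:
            _file_cache[path] = f.read()
    return _file_cache[path]

def clear_cache():
    # Analogous in spirit to xarray.backends.file_manager.FILE_CACHE.clear():
    # drop every cached entry so the next read hits the disk again.
    _file_cache.clear()

# Demonstrate the stale read and the workaround.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "data.txt")
    with open(path, "w") as f:
        f.write("old")
    assert cached_read(path) == "old"

    with open(path, "w") as f:   # rewrite the file on disk
        f.write("new")
    assert cached_read(path) == "old"   # cache still serves stale data

    clear_cache()
    assert cached_read(path) == "new"   # re-read picks up the new contents
```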
id: 1258874354
html_url: https://github.com/pydata/xarray/issues/4240#issuecomment-1258874354
issue_url: https://api.github.com/repos/pydata/xarray/issues/4240
node_id: IC_kwDOAMm_X85LCOXy
user: brianmapes (2086210)
created_at: 2022-09-27T02:14:28Z
updated_at: 2022-09-27T02:14:28Z
author_association: NONE

+1 Complicated, and still vexing this user a year+ later, but it is easier for me to just restart the kernel again and again than to read this and #4879, which is closed but doesn't seem to have succeeded, if I read correctly?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: jupyter repr caching deleted netcdf file (662505658)
id: 676326130
html_url: https://github.com/pydata/xarray/issues/4240#issuecomment-676326130
issue_url: https://api.github.com/repos/pydata/xarray/issues/4240
node_id: MDEyOklzc3VlQ29tbWVudDY3NjMyNjEzMA==
user: markusritschel (3332539)
created_at: 2020-08-19T13:07:05Z
updated_at: 2020-08-19T13:07:05Z
author_association: NONE

Would it be an option to consider the time stamp of the file's last change as a caching criterion?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: jupyter repr caching deleted netcdf file (662505658)
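The suggestion above — keying the cache on the file's last modification time — can be sketched as follows. This is a hypothetical illustration, not xarray's actual implementation; `read_with_mtime_key` is an invented name:

```python
import os
import tempfile

# Hypothetical cache keyed on (path, mtime): rewriting the file changes
# its mtime, so stale entries are simply never looked up again and no
# explicit clear() is needed.
_mtime_cache = {}

def read_with_mtime_key(path):
    key = (path, os.stat(path).st_mtime_ns)
    if key not in _mtime_cache:
        with open(path) as f:
            _mtime_cache[key] = f.read()
    return _mtime_cache[key]

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "data.txt")
    with open(path, "w") as f:
        f.write("v1")
    os.utime(path, ns=(1_000, 1_000))   # pin a known mtime for the demo
    assert read_with_mtime_key(path) == "v1"

    with open(path, "w") as f:          # rewrite the file
        f.write("v2")
    os.utime(path, ns=(2_000, 2_000))   # the rewrite gets a different mtime
    assert read_with_mtime_key(path) == "v2"   # fresh data, no cache clear
```

One caveat with this scheme: on filesystems with coarse timestamp resolution, two writes in quick succession can share an mtime, so the modification time alone is not a fully reliable invalidation key.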
id: 663791784
html_url: https://github.com/pydata/xarray/issues/4240#issuecomment-663791784
issue_url: https://api.github.com/repos/pydata/xarray/issues/4240
node_id: MDEyOklzc3VlQ29tbWVudDY2Mzc5MTc4NA==
user: michaelaye (69774)
created_at: 2020-07-25T01:41:20Z
updated_at: 2020-07-25T01:41:20Z
author_association: NONE

Now I'm wondering why the caching logic is only activated by the repr. As you can see, when printed, it always updates to the status on disk.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: jupyter repr caching deleted netcdf file (662505658)
id: 663791386
html_url: https://github.com/pydata/xarray/issues/4240#issuecomment-663791386
issue_url: https://api.github.com/repos/pydata/xarray/issues/4240
node_id: MDEyOklzc3VlQ29tbWVudDY2Mzc5MTM4Ng==
user: michaelaye (69774)
created_at: 2020-07-25T01:37:20Z
updated_at: 2020-07-25T01:37:20Z
author_association: NONE

Is there a workaround for forcing the opening without restarting the notebook?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: jupyter repr caching deleted netcdf file (662505658)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 188.768ms · About: xarray-datasette