issue_comments

4 rows where author_association = "CONTRIBUTOR", issue = 595882590 and user = 3958036 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
610414198 https://github.com/pydata/xarray/issues/3948#issuecomment-610414198 https://api.github.com/repos/pydata/xarray/issues/3948 MDEyOklzc3VlQ29tbWVudDYxMDQxNDE5OA== johnomotani 3958036 2020-04-07T14:18:36Z 2020-04-07T14:18:36Z CONTRIBUTOR

Sorry for the noise, but at least I will be glad of something to search for next time I forget how to do something like this!

610413864 https://github.com/pydata/xarray/issues/3948#issuecomment-610413864 https://api.github.com/repos/pydata/xarray/issues/3948 MDEyOklzc3VlQ29tbWVudDYxMDQxMzg2NA== johnomotani 3958036 2020-04-07T14:18:01Z 2020-04-07T14:18:01Z CONTRIBUTOR

Thanks @lanougue, @dcherian I think I see the simple answer now: use a deep copy first, to leave the Dataset as-is, then `del` the DataArray when finished with it, e.g.

```
da1 = ds["variable1"].copy(deep=True)
# ... do stuff with da1 ...
del da1

da2 = ds["variable2"].copy(deep=True)
# ... do stuff with da2 ...
del da2

# ... etc.
```
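The copy-then-delete pattern above can be sketched with plain Python objects; the dict below is only a hypothetical stand-in for a Dataset, and all names are illustrative:

```python
import copy

# A plain dict standing in for a Dataset holding several "variables".
ds = {"variable1": [1, 2, 3], "variable2": [4, 5, 6]}

# Deep-copy one variable so that work on it cannot touch the original.
da1 = copy.deepcopy(ds["variable1"])
da1.append(99)  # do stuff with da1

# Dropping the last reference to the copy releases its memory;
# the data inside ds is left exactly as it was.
del da1
print(ds["variable1"])  # → [1, 2, 3]
```

The deep copy is what makes the `del` safe: a plain `da1 = ds["variable1"]` would share the underlying data, so mutating `da1` would also change `ds`.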

610404757 https://github.com/pydata/xarray/issues/3948#issuecomment-610404757 https://api.github.com/repos/pydata/xarray/issues/3948 MDEyOklzc3VlQ29tbWVudDYxMDQwNDc1Nw== johnomotani 3958036 2020-04-07T14:01:28Z 2020-04-07T14:01:28Z CONTRIBUTOR

Thanks @lanougue, but what if I might want to re-load `da1` again later, i.e. if `da` came from some Dataset via `da = ds["variable"]`, I want to leave `ds` in the same state as just after I did `ds = open_dataset(...)`? Wouldn't `del da` remove `"variable"` from `ds`? Or might it not free any memory if `ds` still has a reference to the DataArray?
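Whether `del` frees anything comes down to reference counting: it only unbinds one name, and the object survives as long as the container still points at it. A minimal CPython illustration, using a plain dict as a stand-in for a Dataset (no xarray; names hypothetical):

```python
import sys

container = {"variable": [0] * 1000}  # stand-in for a Dataset
da = container["variable"]            # a second reference to the same list

refs_before = sys.getrefcount(container["variable"])
del da  # removes only the name `da`; "variable" stays in the container
refs_after = sys.getrefcount(container["variable"])

print("variable" in container)   # → True
print(refs_before - refs_after)  # → 1 (one reference fewer, object still alive)
```

So `del da` cannot remove `"variable"` from the container; the memory is only released once every reference, including the one the container holds, is gone.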

610403146 https://github.com/pydata/xarray/issues/3948#issuecomment-610403146 https://api.github.com/repos/pydata/xarray/issues/3948 MDEyOklzc3VlQ29tbWVudDYxMDQwMzE0Ng== johnomotani 3958036 2020-04-07T13:58:44Z 2020-04-07T13:58:44Z CONTRIBUTOR

OK, I think I've answered my own question. Looks like dask can handle this workflow already, something like:

```
# do_some_work does not call .load() or .compute() anywhere

result1a, result1b, result1c = dask.compute(do_some_work(ds["variable1"]))

result2a, result2b, result2c = dask.compute(do_some_work(ds["variable2"]))

# ... etc.
```

I do still wonder if there might be any case where .release() might be useful...
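The workflow above amounts to extracting every result you need from a variable in one call, so nothing has to stay resident afterwards. A dask-free sketch of that shape in plain Python (`do_some_work` and the data are hypothetical):

```python
def do_some_work(values):
    # Produce all required results from this variable in a single pass,
    # so the loaded data can be dropped immediately afterwards.
    return min(values), max(values), sum(values) / len(values)

ds = {"variable1": [3, 1, 2], "variable2": [10, 20, 30]}

result1a, result1b, result1c = do_some_work(ds["variable1"])
result2a, result2b, result2c = do_some_work(ds["variable2"])

print(result1a, result1b, result1c)  # → 1 3 2.0
```

With real dask, `do_some_work` would instead build a graph of lazy operations and `dask.compute` would evaluate them chunk by chunk, so the whole variable need never be held in memory at once.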


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette