issue_comments


6 rows where issue = 595882590 sorted by updated_at descending



Issue: Releasing memory? · 595882590
johnomotani (CONTRIBUTOR) · 2020-04-07T14:18:36Z · https://github.com/pydata/xarray/issues/3948#issuecomment-610414198

Sorry for the noise, but at least I will be glad of something to search for next time I forget how to do something like this!

johnomotani (CONTRIBUTOR) · 2020-04-07T14:18:01Z · https://github.com/pydata/xarray/issues/3948#issuecomment-610413864

Thanks @lanougue, @dcherian I think I see the simple answer now: use a deep copy first, to leave the Dataset as-is, then del the DataArray when finished with it, e.g.

```
da1 = ds["variable1"].copy(deep=True)
# ... do stuff with da1 ...
del da1

da2 = ds["variable2"].copy(deep=True)
# ... do stuff with da2 ...
del da2

# ... etc.
```
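The copy-then-del pattern above can be sketched with plain Python objects standing in for a Dataset (the dict and list here are illustrative stand-ins, not xarray API — real DataArrays behave the same way with respect to references):

```python
import copy

# Toy stand-in for a Dataset: a dict mapping variable names to big arrays
# (plain lists here instead of DataArrays).
ds = {"variable1": [0.0] * 1_000_000}

# Deep-copy the variable, work on the copy, then drop it.
da1 = copy.deepcopy(ds["variable1"])
da1[0] = 42.0   # ... do stuff with da1 ...
del da1         # the copy's only reference is gone, so it is reclaimed

# The original dataset is untouched by either the copy or the del.
assert ds["variable1"][0] == 0.0
```

Because the work happens on an independent copy, `del` really does drop the last reference to it, while `ds` stays exactly as it was after `open_dataset(...)`.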

dcherian (MEMBER) · 2020-04-07T14:02:56Z · https://github.com/pydata/xarray/issues/3948#issuecomment-610405543

AFAIK "release" happens during garbage collection, so deleting things with `del` is the way to go.

I have also been using `cluster.restart()` to start from scratch.
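The claim above, that memory is released when objects are garbage-collected, can be checked with the standard library alone (a generic CPython sketch, not dask-specific): reference counting frees most objects as soon as `del` drops the last name, and the cyclic collector handles the rest.

```python
import gc
import weakref

class Node:
    """Minimal object that can participate in a reference cycle."""
    pass

a, b = Node(), Node()
a.other, b.other = b, a   # create a reference cycle
probe = weakref.ref(a)    # weak reference: does not keep `a` alive

del a, b                  # names are gone, but the cycle keeps refcounts > 0
gc.collect()              # the cyclic garbage collector reclaims the pair

assert probe() is None    # the cycle really was released
```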

johnomotani (CONTRIBUTOR) · 2020-04-07T14:01:28Z · https://github.com/pydata/xarray/issues/3948#issuecomment-610404757

Thanks @lanougue, but what if I might want to re-load da1 again later, i.e. if da came from some Dataset via `da = ds["variable"]` and I want to leave ds in the same state as just after I did `ds = open_dataset(...)`? Wouldn't `del da` remove "variable" from ds? Or maybe not free any memory if ds still has a reference to the DataArray?
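The first half of the question is answered by plain Python name-binding semantics: `del` unbinds a name, it never mutates the container the object came from. A sketch with a dict as a stand-in for a Dataset:

```python
ds = {"variable": [1, 2, 3]}   # stand-in for a Dataset
da = ds["variable"]            # a second reference to the same list

del da                         # removes only the *name* da

assert "variable" in ds        # ds still holds the variable...
assert ds["variable"] == [1, 2, 3]
# ...which also means no memory was freed: ds keeps the object alive.
```

So the second worry is the correct one: `del da` alone frees nothing while `ds` still references the same data.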

johnomotani (CONTRIBUTOR) · 2020-04-07T13:58:44Z · https://github.com/pydata/xarray/issues/3948#issuecomment-610403146

OK, I think I've answered my own question. Looks like dask can handle this workflow already, something like:

```
# do_some_work does not call .load() or .compute() anywhere
result1a, result1b, result1c = dask.compute(do_some_work(ds["variable1"]))

result2a, result2b, result2c = dask.compute(do_some_work(ds["variable2"]))

# ... etc.
```

I do still wonder if there might be any case where .release() might be useful...

lanougue (NONE) · 2020-04-07T13:57:04Z · https://github.com/pydata/xarray/issues/3948#issuecomment-610402170

Hi, if results1 is already evaluated, just replace `da1.release()` with `del da1`. Python should automatically release the memory.


Powered by Datasette · About: xarray-datasette