issue_comments


6 rows where author_association = "MEMBER", issue = 295270362 (Avoid Adapters in task graphs?) and user = 306380 (mrocklin), sorted by updated_at descending. None of the comments received any reactions.


371813468 · mrocklin · MEMBER · 2018-03-09T13:35:38Z
https://github.com/pydata/xarray/issues/1895#issuecomment-371813468

If things are operational then we're fine. It may be that much of this cost was due to other serialization work in gcsfs, zarr, or elsewhere.

On Fri, Mar 9, 2018 at 12:33 AM, Joe Hamman (https://github.com/pydata/xarray/issues/1895#issuecomment-371718136) wrote:

Where did we land here? Is there an action item that came from this discussion?

In my view, the benefit of having consistent getitem behavior for all of our backends is worth working through potential hiccups in the way dask interacts with xarray.

363936464 · mrocklin · MEMBER · 2018-02-07T22:42:40Z
https://github.com/pydata/xarray/issues/1895#issuecomment-363936464

Well, presumably opening a zarr file requires a small amount of IO to read out the metadata.

Ah, this may actually be a non-trivial amount of IO; it currently takes a noticeable amount of time to read a zarr file. See https://github.com/pangeo-data/pangeo/issues/99#issuecomment-363782191. Are we doing this on each deserialization?
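
That per-deserialization cost is straightforward to measure. A minimal sketch, assuming an existing local store at the hypothetical path example.zarr; each pickle round trip approximates what a dask worker pays when it deserializes a task that carries the store:

import pickle
import time

import zarr

# "example.zarr" is a hypothetical local store; substitute a real one.
z = zarr.open("example.zarr", mode="r")

n = 100
start = time.perf_counter()
for _ in range(n):
    # Each round trip may re-read store metadata, mimicking what a
    # worker does when it deserializes a task that carries the array.
    pickle.loads(pickle.dumps(z))
print(f"{(time.perf_counter() - start) / n * 1e3:.2f} ms per round trip")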

363932105 · mrocklin · MEMBER · 2018-02-07T22:25:45Z
https://github.com/pydata/xarray/issues/1895#issuecomment-363932105

No, not particularly, though potentially opening a zarr store could be a little expensive.

What makes it expensive?

I'm mostly not sure how this would be done. Currently, we open files, create array objects, do some lazy decoding, and then create dask arrays with from_array.

Maybe we add an option to from_array to have it inline the array into the task, rather than creating an explicit dependency.

This does feel like I'm trying to duct-tape over some underlying problem that I can't resolve, though.
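
For concreteness, a sketch of the two graph layouts under discussion, assuming a dask version recent enough to have the inline_array keyword on from_array (it did not exist when this comment was written):

import numpy as np
import dask.array as da

x = np.arange(1_000_000).reshape(1000, 1000)

# Default: x becomes a single shared node in the task graph; every
# chunk task depends on it, so it is serialized once and shared.
shared = da.from_array(x, chunks=(100, 100))

# Inlined: x is embedded directly into each chunk's getitem task, so
# there is no shared dependency, at the cost of serializing x into
# every individual task.
inlined = da.from_array(x, chunks=(100, 100), inline_array=True)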

363925208 · mrocklin · MEMBER · 2018-02-07T21:59:56Z
https://github.com/pydata/xarray/issues/1895#issuecomment-363925208

Any concerns about recreating these objects for every access?

363925086 · mrocklin · MEMBER · 2018-02-07T21:59:28Z
https://github.com/pydata/xarray/issues/1895#issuecomment-363925086

Do these objects happen to store any cached results? I'm seeing odd performance issues around these objects and am curious about any ways in which they might be fancy.

363889835 · mrocklin · MEMBER · 2018-02-07T19:52:18Z
https://github.com/pydata/xarray/issues/1895#issuecomment-363889835

https://github.com/pangeo-data/pangeo/issues/99#issuecomment-363852820

also cc @jhamman


Table schema

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
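
For reference, a sketch of the query behind this page, run through Python's sqlite3. The database filename github.db is an assumption; substitute the actual Datasette database file:

import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical filename
rows = conn.execute(
    """
    SELECT id, updated_at, body
    FROM issue_comments
    WHERE author_association = 'MEMBER'
      AND issue = 295270362
      AND [user] = 306380
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, updated_at, body in rows:
    print(comment_id, updated_at, body[:60])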