
issue_comments


6 rows where author_association = "MEMBER" and issue = 278286073 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
352518249 https://github.com/pydata/xarray/pull/1750#issuecomment-352518249 https://api.github.com/repos/pydata/xarray/issues/1750 MDEyOklzc3VlQ29tbWVudDM1MjUxODI0OQ== shoyer 1217238 2017-12-18T18:34:51Z 2017-12-18T18:34:51Z MEMBER

@duncanwp that example looks good to me. I assume it runs locally with this branch?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray to and from Iris 278286073
352480739 https://github.com/pydata/xarray/pull/1750#issuecomment-352480739 https://api.github.com/repos/pydata/xarray/issues/1750 MDEyOklzc3VlQ29tbWVudDM1MjQ4MDczOQ== shoyer 1217238 2017-12-18T16:35:06Z 2017-12-18T16:35:06Z MEMBER

https://github.com/dask/dask/issues/2977 needs to be resolved to make it possible to properly translate dask arrays back and forth between xarray/Iris.

I don't want to hold up this PR, which is coming along very nicely (and I'm sure would already be useful), so can we simply defer handling of dask arrays for now? I would suggest raising NotImplementedError and printing an error message with a link to a follow-up issue in xarray's issue tracker.
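The deferral suggested here could look something like the sketch below. The helper name and the duck-typed laziness check are illustrative assumptions, not the PR's actual code:

```python
# Sketch: refuse dask-backed cube data with a clear NotImplementedError
# rather than mishandling it silently. `extract_cube_data` and the
# `compute`-attribute check are assumptions for illustration only.

def extract_cube_data(cube):
    # Prefer the lazy accessor when present (Iris 2-style API; an assumption)
    data = cube.core_data() if hasattr(cube, "core_data") else cube.data
    # Crude duck-typed check for a lazy (dask-like) array
    if hasattr(data, "compute"):
        raise NotImplementedError(
            "dask-backed Iris cubes are not yet supported; "
            "see the follow-up issue in xarray's issue tracker"
        )
    return data
```

Raising early like this keeps the eager-data path working while making the unsupported case loud and actionable.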

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray to and from Iris 278286073
350033913 https://github.com/pydata/xarray/pull/1750#issuecomment-350033913 https://api.github.com/repos/pydata/xarray/issues/1750 MDEyOklzc3VlQ29tbWVudDM1MDAzMzkxMw== shoyer 1217238 2017-12-07T17:13:49Z 2017-12-07T17:13:49Z MEMBER

I don't know exactly what would go wrong, but I'm pretty sure masked Dask arrays would break xarray in some subtle ways. It would be better to convert them to unmasked Dask arrays using NaN.

On Thu, Dec 7, 2017 at 1:18 AM Duncan Watson-Parris notifications@github.com wrote:

@duncanwp commented on this pull request.

In xarray/convert.py https://github.com/pydata/xarray/pull/1750#discussion_r155465954:

@@ -181,7 +183,9 @@ def from_iris(cube):
     cell_methods = _iris_cell_methods_to_str(cube.cell_methods)
     if cell_methods:
         array_attrs['cell_methods'] = cell_methods
-    dataarray = DataArray(cube.data, coords=coords, name=name,
+
+    cube_data = ma.filled(cube.core_data(), get_fill_value(cube.dtype)) if hasattr(cube, 'core_data') else cube.data

OK, I hadn't appreciated that dask wasn't a hard requirement. I could leave them as whatever type they're currently stored in - how would xarray cope with a dask masked array?

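The NaN conversion recommended above might be sketched as follows. The helper name is an assumption; the key point is that masked entries become NaN in a plain ndarray:

```python
import numpy as np

# Sketch: fill a masked array's masked entries with NaN so xarray receives
# a plain ndarray. `unmask_with_nan` is an illustrative name; note that
# integer data must be cast to float before it can hold NaN.

def unmask_with_nan(arr):
    if np.ma.isMaskedArray(arr):
        return np.ma.filled(arr.astype(float), np.nan)
    return arr
```

The float cast is the main caveat: for integer-typed cubes this silently changes the dtype, which is the trade-off for representing missing values without a mask.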

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray to and from Iris 278286073
348559234 https://github.com/pydata/xarray/pull/1750#issuecomment-348559234 https://api.github.com/repos/pydata/xarray/issues/1750 MDEyOklzc3VlQ29tbWVudDM0ODU1OTIzNA== shoyer 1217238 2017-12-01T17:38:42Z 2017-12-01T17:38:42Z MEMBER

I think it's OK in I/O (for now). This is also the place where we document serialization in pickle, and the from_dict methods.

Agreed. In the long term, the I/O docs have gotten pretty long; it could make sense to split them up into several subpages.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray to and from Iris 278286073
348489979 https://github.com/pydata/xarray/pull/1750#issuecomment-348489979 https://api.github.com/repos/pydata/xarray/issues/1750 MDEyOklzc3VlQ29tbWVudDM0ODQ4OTk3OQ== fmaussion 10050469 2017-12-01T13:05:59Z 2017-12-01T13:05:59Z MEMBER

Do you have a feel for if/where else this should be documented? I'm not sure it really fits in I/O...

I think it's OK in I/O (for now). This is also the place where we document serialization in pickle, and the from_dict methods.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray to and from Iris 278286073
348387554 https://github.com/pydata/xarray/pull/1750#issuecomment-348387554 https://api.github.com/repos/pydata/xarray/issues/1750 MDEyOklzc3VlQ29tbWVudDM0ODM4NzU1NA== shoyer 1217238 2017-12-01T02:55:41Z 2017-12-01T02:55:41Z MEMBER

@pelson any thoughts here?

I think this may be a reasonable place to start, though it is certainly not leveraging Iris's full metadata decoding capabilities.

It would also be nice to pass dask arrays back and forth, but that will probably need to wait until after Iris 2.0 and https://github.com/pydata/xarray/issues/1372 is solved.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray to and from Iris 278286073

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 18.027ms · About: xarray-datasette