issue_comments

11 rows where user = 3217406 sorted by updated_at descending

issue 4

  • Set a default _FillValue of NaN for float types 4
  • Use xarray.open_dataset() for password-protected Opendap files 3
  • Implementing dask.array.coarsen in xarrays 3
  • User warning / more transparent _FillValue interface for .to_netcdf() 1

user 1

  • laliberte · 11

author_association 1

  • CONTRIBUTOR 11
id · html_url · issue_url · node_id · user · created_at · updated_at ▲ · author_association · body · reactions · performed_via_github_app · issue
305775273 https://github.com/pydata/xarray/issues/1068#issuecomment-305775273 https://api.github.com/repos/pydata/xarray/issues/1068 MDEyOklzc3VlQ29tbWVudDMwNTc3NTI3Mw== laliberte 3217406 2017-06-02T12:44:23Z 2017-06-02T13:00:46Z CONTRIBUTOR

@jenfly and @shoyer pydap version 3.2.2 (newly released last week) should have fixed this issue. Could you verify?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Use xarray.open_dataset() for password-protected Opendap files 186169975
305176003 https://github.com/pydata/xarray/issues/1192#issuecomment-305176003 https://api.github.com/repos/pydata/xarray/issues/1192 MDEyOklzc3VlQ29tbWVudDMwNTE3NjAwMw== laliberte 3217406 2017-05-31T12:45:18Z 2017-05-31T12:45:18Z CONTRIBUTOR

The reason I ask is that, ideally, coarsen would work exactly the same with dask.array and np.ndarray data. By using both serial and parallel coarsen methods from dask, we are adding a dependency but we are ensuring forward compatibility. @shoyer, what's your preference? (1) replicate serial coarsen into xarray or (2) point to dask coarsen methods?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implementing dask.array.coarsen in xarrays 198742089
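
To make the comparison in the comment above concrete, here is a minimal sketch (not from the thread) checking that dask's serial, NumPy-level coarsen and its parallel counterpart agree on the same data; the array shape and chunk sizes are illustrative only.

import numpy as np
import dask.array as da
from dask.array.chunk import coarsen as coarsen_serial  # NumPy-level implementation

x_np = np.arange(16.0).reshape(4, 4)
x_da = da.from_array(x_np, chunks=(2, 2))

serial = coarsen_serial(np.mean, x_np, {0: 2, 1: 2})           # reduces 2x2 neighbourhoods of a NumPy array
parallel = da.coarsen(np.mean, x_da, {0: 2, 1: 2}).compute()   # same reduction, chunk by chunk on a dask array

assert np.allclose(serial, parallel)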
305169201 https://github.com/pydata/xarray/issues/1192#issuecomment-305169201 https://api.github.com/repos/pydata/xarray/issues/1192 MDEyOklzc3VlQ29tbWVudDMwNTE2OTIwMQ== laliberte 3217406 2017-05-31T12:00:11Z 2017-05-31T12:00:11Z CONTRIBUTOR

If it's part of dask then it would be almost trivial to implement in xarray. @mrocklin Can we assume that dask/array/chunk.py::coarsen is part of the public API?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implementing dask.array.coarsen in xarrays 198742089
291262663 https://github.com/pydata/xarray/issues/1068#issuecomment-291262663 https://api.github.com/repos/pydata/xarray/issues/1068 MDEyOklzc3VlQ29tbWVudDI5MTI2MjY2Mw== laliberte 3217406 2017-04-03T20:24:07Z 2017-04-03T20:24:07Z CONTRIBUTOR

@shoyer @jenfly: Good news: I think I was able to track down the bug in pydap that was preventing compatibility. I'm putting a PR together, and we can expect it to be merged into master pretty soon. I wanted to give you a heads-up so that you don't waste more time on this.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Use xarray.open_dataset() for password-protected Opendap files 186169975
290189772 https://github.com/pydata/xarray/issues/1068#issuecomment-290189772 https://api.github.com/repos/pydata/xarray/issues/1068 MDEyOklzc3VlQ29tbWVudDI5MDE4OTc3Mg== laliberte 3217406 2017-03-29T18:54:09Z 2017-03-29T18:54:09Z CONTRIBUTOR

I like the idea of passing PydapDataStore objects that include the session object. It seems more likely to be forward compatible, especially if Central Authentication Services multiply (as one would expect) with different authentication mechanisms.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Use xarray.open_dataset() for password-protected Opendap files 186169975
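
The session-passing pattern discussed in the comment above looks roughly like the following with xarray.backends.PydapDataStore. A minimal sketch, assuming a reasonably recent xarray and pydap and an Earthdata-style CAS login; the URL and credentials are placeholders.

import xarray as xr
from pydap.cas.urs import setup_session  # NASA URS / Earthdata-style CAS login

url = "https://example.org/opendap/protected/dataset.nc"  # hypothetical endpoint
session = setup_session("my_username", "my_password", check_url=url)

# Hand the authenticated session to xarray through the data store object
store = xr.backends.PydapDataStore.open(url, session=session)
ds = xr.open_dataset(store)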
270439515 https://github.com/pydata/xarray/issues/1192#issuecomment-270439515 https://api.github.com/repos/pydata/xarray/issues/1192 MDEyOklzc3VlQ29tbWVudDI3MDQzOTUxNQ== laliberte 3217406 2017-01-04T17:59:08Z 2017-01-04T17:59:08Z CONTRIBUTOR

The dask implementation has the following API: dask.array.coarsen(reduction, x, axes, trim_excess=False), so a proposed xarray API could look like xarray.coarsen(reduction, x, axes, chunks=None, trim_excess=False), resulting in the following implementation:

  1. If the underlying data of x is a dask.array, yield x.chunks(chunks).array.coarsen(reduction, axes, trim_excess).
  2. Else, copy the block_reduce function.

Does that fit with the xarray API?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implementing dask.array.coarsen in xarrays 198742089
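
A rough, illustrative sketch of the dispatch proposed in the comment above. The function name, signature, and the NumPy fallback are mine, not how xarray ultimately implemented it.

import dask.array as da
import xarray as xr
from dask.array.chunk import coarsen as coarsen_serial


def coarsen(reduction, x, axes, chunks=None, trim_excess=False):
    """Apply `reduction` over fixed windows of a DataArray; `axes` maps axis index -> factor."""
    if isinstance(x.data, da.Array):
        # parallel path: rechunk if requested, then let dask coarsen chunk by chunk
        # (dask expects the chunk sizes to be compatible with the coarsening factors)
        data = x.chunk(chunks).data if chunks is not None else x.data
        out = da.coarsen(reduction, data, axes, trim_excess=trim_excess)
    else:
        # serial path: reuse dask's NumPy-level coarsen rather than copying block_reduce
        out = coarsen_serial(reduction, x.data, axes, trim_excess=trim_excess)
    return xr.DataArray(out, dims=x.dims)  # coordinates dropped for brevity

xarray's coarsen interface eventually shipped in a different, method-based form (e.g. DataArray.coarsen(time=2).mean()), but the dask-versus-NumPy dispatch above is the crux of what the comment proposes.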
267492795 https://github.com/pydata/xarray/pull/1165#issuecomment-267492795 https://api.github.com/repos/pydata/xarray/issues/1165 MDEyOklzc3VlQ29tbWVudDI2NzQ5Mjc5NQ== laliberte 3217406 2016-12-16T01:21:30Z 2016-12-16T01:21:30Z CONTRIBUTOR

@shoyer Let me know if you need any more changes.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Set a default _FillValue of NaN for float types 195832230
267392582 https://github.com/pydata/xarray/pull/1165#issuecomment-267392582 https://api.github.com/repos/pydata/xarray/issues/1165 MDEyOklzc3VlQ29tbWVudDI2NzM5MjU4Mg== laliberte 3217406 2016-12-15T17:43:21Z 2016-12-15T17:43:21Z CONTRIBUTOR

So I've tried with conventions.py and it works. However, allowing a NaN _FillValue for complex floats raises an exception in h5py: h5py/h5py#805. This means that, until this is fixed in h5py, we should restrict this default behaviour to real float types.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Set a default _FillValue of NaN for float types 195832230
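
A minimal sketch (mine, not the PR's actual code) of the dtype restriction described above: only real floating-point variables get a default NaN _FillValue, complex ones are left alone until h5py/h5py#805 is resolved.

import numpy as np


def default_fill_value(dtype):
    """Return np.nan for real float dtypes, None (no default fill value) otherwise."""
    if np.issubdtype(dtype, np.floating):  # float16/32/64, but not complex
        return np.nan
    return None


assert default_fill_value(np.dtype("float64")) is not None
assert default_fill_value(np.dtype("complex128")) is None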
267371351 https://github.com/pydata/xarray/pull/1165#issuecomment-267371351 https://api.github.com/repos/pydata/xarray/issues/1165 MDEyOklzc3VlQ29tbWVudDI2NzM3MTM1MQ== laliberte 3217406 2016-12-15T16:23:41Z 2016-12-15T16:23:41Z CONTRIBUTOR

I know, but the alternative would be to scan the data before deciding, and that would kind of break the whole dask integration.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Set a default _FillValue of NaN for float types 195832230
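
An illustration (mine, not from the PR) of the cost being avoided: checking for NaNs up front forces a full evaluation of a lazy dask-backed array before anything is written.

import dask.array as da

lazy = da.random.random((4_000, 4_000), chunks=(1_000, 1_000))
has_nan = bool(da.isnan(lazy).any().compute())  # triggers computation of the whole array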
267366770 https://github.com/pydata/xarray/pull/1165#issuecomment-267366770 https://api.github.com/repos/pydata/xarray/issues/1165 MDEyOklzc3VlQ29tbWVudDI2NzM2Njc3MA== laliberte 3217406 2016-12-15T16:07:45Z 2016-12-15T16:07:45Z CONTRIBUTOR

The only problem I see is that, in the current implementation, this fix sets a _FillValue attribute even if there are no NaNs in the variable. Usually, CDO would not do that. I guess there is little risk, but I think it's important that this be said. Alternative implementations would be way too cumbersome, so I think the approach used in this PR is the lesser of two evils.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Set a default _FillValue of NaN for float types 195832230
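
For a user who does not want the attribute written on a clean variable, xarray's per-variable encoding can disable it. A small sketch, not part of the PR; the dataset and variable names are hypothetical.

import numpy as np
import xarray as xr

ds = xr.Dataset({"t2m": ("time", np.arange(3.0))})
# Setting _FillValue to None in the encoding suppresses the default fill value
ds.to_netcdf("out.nc", encoding={"t2m": {"_FillValue": None}})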
267094196 https://github.com/pydata/xarray/issues/1163#issuecomment-267094196 https://api.github.com/repos/pydata/xarray/issues/1163 MDEyOklzc3VlQ29tbWVudDI2NzA5NDE5Ng== laliberte 3217406 2016-12-14T17:13:05Z 2016-12-14T17:13:05Z CONTRIBUTOR

I'm pretty sure that, over many versions of CDO, NaNs have not been properly handled. For example, all the setmiss* commands have a very hard time with NaNs.

I have been using the encoding option, but I find it not very intuitive and kind of cumbersome when there are many variables.

I think you're right that actually putting _FillValue = NaN in the variable attribute might be enough to solve most issues with third parties. If it still does not work, then it would be reasonable to raise this issue with each one of the third parties.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  User warning / more transparent _FillValue interface for .to_netcdf() 195576963
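
A sketch of the encoding workaround mentioned in the comment above, which is what becomes cumbersome with many variables; the dataset and variable names are hypothetical.

import numpy as np
import xarray as xr

ds = xr.Dataset({
    "t2m": ("time", np.array([280.0, np.nan, 281.5])),
    "precip": ("time", np.array([0.0, 1.2, np.nan])),
})

# Explicitly request _FillValue = NaN for every real floating-point variable
encoding = {
    name: {"_FillValue": np.nan}
    for name, var in ds.data_vars.items()
    if np.issubdtype(var.dtype, np.floating)
}
ds.to_netcdf("with_fill.nc", encoding=encoding)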

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);