
issue_comments

6 rows where user = 1445602 (PeterDSteinberg), sorted by updated_at descending

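This filtered, sorted view corresponds to a straightforward query against the table. A minimal sketch of that SQL, assuming a SELECT of all columns (Datasette's actual generated query, including any row limit, may differ):

SELECT *
FROM issue_comments
WHERE [user] = 1445602
ORDER BY updated_at DESC;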

Facet: issue (5 values)

  • Implementing dask.array.coarsen in xarrays · 2
  • API for multi-dimensional resampling/regridding · 1
  • Fixes OS error arising from too many files open · 1
  • Add RasterIO backend · 1
  • Center the coordinates to pixels for rasterio backend · 1

Facet: user (1 value)

  • PeterDSteinberg · 6

Facet: author_association (1 value)

  • NONE · 6
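
The facet counts above can be reproduced with a grouped query of roughly this shape (a sketch; the issue titles displayed in the facet come from the referenced rows in the issues table, not from this query):

SELECT [issue], count(*) AS comment_count
FROM issue_comments
WHERE [user] = 1445602
GROUP BY [issue]
ORDER BY comment_count DESC;
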
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at (sort column, descending), author_association, body, reactions, performed_via_github_app, issue. The performed_via_github_app value is empty for every row shown below.

id: 313232313
html_url: https://github.com/pydata/xarray/pull/1468#issuecomment-313232313
issue_url: https://api.github.com/repos/pydata/xarray/issues/1468
node_id: MDEyOklzc3VlQ29tbWVudDMxMzIzMjMxMw==
user: PeterDSteinberg (1445602)
created_at: 2017-07-05T21:29:31Z
updated_at: 2017-07-05T21:29:31Z
author_association: NONE
body:

+1 from me as well.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Center the coordinates to pixels for rasterio backend (239636285)

id: 306546430
html_url: https://github.com/pydata/xarray/pull/1260#issuecomment-306546430
issue_url: https://api.github.com/repos/pydata/xarray/issues/1260
node_id: MDEyOklzc3VlQ29tbWVudDMwNjU0NjQzMA==
user: PeterDSteinberg (1445602)
created_at: 2017-06-06T16:44:43Z
updated_at: 2017-06-06T16:44:43Z
author_association: NONE
body:

+1 @fmaussion - Thanks!

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Add RasterIO backend (206905158)

id: 305189172
html_url: https://github.com/pydata/xarray/issues/1192#issuecomment-305189172
issue_url: https://api.github.com/repos/pydata/xarray/issues/1192
node_id: MDEyOklzc3VlQ29tbWVudDMwNTE4OTE3Mg==
user: PeterDSteinberg (1445602)
created_at: 2017-05-31T13:38:40Z
updated_at: 2017-05-31T13:39:22Z
author_association: NONE
body:

Hi @darothen, earthio is a recent experimental refactor of what was the elm.readers subpackage. elm (Ensemble Learning Models) was developed with a Phase I NASA SBIR in 2016 and in part reflects our thinking in late 2015, when xarray was newer and we were planning the proposal. In roughly the last month we have started a Phase II of development on multi-model dask/xarray ML algorithms based on xarray, dask, scikit-learn and a Bokeh maps UI for tasks like land cover classification. I'll add you to elm; feel free to contact me at psteinberg [at] continuum [dot] io. We will do more promotion / blogs in the near term, and in about 12 months we will release a free/open collection of notebooks that form a "Machine Learning with Environmental Data" 3-day course.

Back to the subject matter of the thread: you can assign the issue to me (can you also add me to the xarray repo so I can assign myself things?). I'll wait to get started until after @shoyer comments on @laliberte's question:

(1) replicate serial coarsen into xarray or (2) point to dask coarsen methods?

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Implementing dask.array.coarsen in xarrays (198742089)

id: 305033710
html_url: https://github.com/pydata/xarray/issues/486#issuecomment-305033710
issue_url: https://api.github.com/repos/pydata/xarray/issues/486
node_id: MDEyOklzc3VlQ29tbWVudDMwNTAzMzcxMA==
user: PeterDSteinberg (1445602)
created_at: 2017-05-30T23:05:49Z
updated_at: 2017-05-30T23:05:49Z
author_association: NONE
body:

Regridding is of interest to NASA and other clients of ours. It is important to them to be able to do broadcast operations between rasters that differ in resolution or are otherwise offset. We'll follow the XMap repo mentioned above ( @jhamman ) and see about building on that style. Our clients and open source tools like datashader for viz and elm for ML could use XMap and benefit from coordinate transformations and regridding. We have a meeting internally to discuss approaches to the coordinates' metadata and resampling / regridding and I'll be in touch further soon about how we can help here (see also the issues on this experimental earthio repo).

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: API for multi-dimensional resampling/regridding (96211612)

id: 305028421
html_url: https://github.com/pydata/xarray/issues/1192#issuecomment-305028421
issue_url: https://api.github.com/repos/pydata/xarray/issues/1192
node_id: MDEyOklzc3VlQ29tbWVudDMwNTAyODQyMQ==
user: PeterDSteinberg (1445602)
created_at: 2017-05-30T22:36:15Z
updated_at: 2017-05-30T22:36:15Z
author_association: NONE
body:

Hello @laliberte @shoyer @jhamman. I'm with Continuum and working on NASA-funded Earth science ML (see ensemble learning models in github and its documentation here, as well as earthio, an experimental repo we have discussed simplifying and transitioning to Xarray - earthio issue 12 and earthio issue 13). We (Continuum NASA, dask, and datashader team members) met with @rabernat this morning and discussed ideas for collaborating better with the Xarray team. I'll comment on more issues more regularly and make some experimental PRs over the next month. I'd like to keep most of the discussion on github issues so it is of general utility, but I'm happy to chat anytime if you want to talk in further detail about longer-term goals with Xarray.

We can submit a PR on this issue for dask's coarsen and the specs above for using block_reduce in some situations. We have a variety of tasks we are covering now and in a planning / architecture phase for NASA. If we are too slow to respond to this issue, feel free to ping me.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Implementing dask.array.coarsen in xarrays (198742089)

id: 275251977
html_url: https://github.com/pydata/xarray/pull/1198#issuecomment-275251977
issue_url: https://api.github.com/repos/pydata/xarray/issues/1198
node_id: MDEyOklzc3VlQ29tbWVudDI3NTI1MTk3Nw==
user: PeterDSteinberg (1445602)
created_at: 2017-01-25T22:22:52Z
updated_at: 2017-01-25T22:22:52Z
author_association: NONE
body:

I appreciate your work on this too-many-files-open error - I think your fixes will add a lot of value to the NetCDF multi-file functionality. In this notebook using K-Means clustering on multi-file NetCDF data sets, I have repeatedly experienced the too-many-open-files error, even with attempts to adjust via ulimit. I can test out the notebook again as this PR is finalized.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Fixes OS error arising from too many files open (199900056)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
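
As a usage sketch against this schema, the query below joins each comment back to its referenced issue and reads the total reaction count out of the JSON text stored in [reactions]. It assumes SQLite's json_extract function is available and that the [issues] table has a [title] column; neither is shown on this page.

SELECT
    c.id,
    i.[title],                                       -- assumed column on the referenced issues table
    json_extract(c.reactions, '$.total_count') AS reaction_total
FROM issue_comments AS c
JOIN issues AS i ON i.id = c.issue                   -- follows the foreign key declared above
WHERE c.[user] = 1445602
ORDER BY c.updated_at DESC;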