issue_comments

5 rows where author_association = "MEMBER", issue = 229474101 (concat prealigned objects) and user = 1197350 (rabernat), sorted by updated_at descending

rabernat (MEMBER) · comment 315354054 · created 2017-07-14T13:01:45Z · updated 2017-07-14T13:02:20Z
https://github.com/pydata/xarray/pull/1413#issuecomment-315354054

Yes, I think it should be closed. There are better ways to accomplish the desired goals.

Specifically, allowing the user to pass kwargs to concat via open_mfdataset would be useful.
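
As a rough sketch of what that pass-through could look like from the user's side (illustrative only, not something this PR implements; later xarray releases did grow concat-related keywords such as data_vars, coords, and join on open_mfdataset):

import xarray as xr

# Illustrative: open many files as one dataset, forwarding
# concat/align options instead of accepting concat's defaults.
# The file glob and keyword values are placeholders.
ds = xr.open_mfdataset(
    "data/*.nc",
    combine="nested",
    concat_dim="time",
    data_vars="minimal",  # forwarded to concat
    coords="minimal",     # forwarded to concat
    join="exact",         # forwarded to align
)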

Reactions: none

rabernat (MEMBER) · comment 302843502 · created 2017-05-20T01:51:03Z
https://github.com/pydata/xarray/pull/1413#issuecomment-302843502

Since the expensive part (for me) is actually reading all the coordinates, I'm not sure that this PR makes sense any more.

What I am going for here could probably be accomplished by allowing the user to pass join='exact' via open_mfdataset. A related optimization would be to allow the user to pass coords='minimal' (or other concat coords options) via open_mfdataset as well.
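
For concreteness, a sketch of the intended join='exact' semantics at the align level (this option did later land in xarray; the toy datasets here are invented for illustration):

import numpy as np
import xarray as xr

a = xr.Dataset({"u": ("x", np.zeros(3))}, coords={"x": [0, 1, 2]})
b = xr.Dataset({"u": ("x", np.zeros(3))}, coords={"x": [0, 1, 3]})

xr.align(a, a, join="exact")   # indexes identical: nothing to re-index
try:
    xr.align(a, b, join="exact")
except ValueError as err:      # raises instead of silently re-indexing
    print("not aligned:", err)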

For really big datasets, I think we will want to take the NCML approach, generating the xarray metadata as a pre-processing step. We could then add a function like open_ncml_dataset to xarray, which would parse this metadata and construct the dataset in a more efficient way (i.e. without reading redundant coordinates).

Reactions: none

rabernat (MEMBER) · comment 302724756 · created 2017-05-19T14:53:49Z
https://github.com/pydata/xarray/pull/1413#issuecomment-302724756

As I think about this further, I realize it might be futile to avoid reading the dimensions from all the files. This is a basic part of how open_dataset works.

Reactions: none

rabernat (MEMBER) · comment 302576832 · created 2017-05-19T00:30:13Z · updated 2017-05-19T00:30:28Z
https://github.com/pydata/xarray/pull/1413#issuecomment-302576832

> Given a collection of datasets, how do I know if setting prealigned=True will work?

I guess we would want to check that (a) the necessary variables and dimensions exist in all datasets and (b) the dimensions have the same length. We would want to bypass the actual reading of the indices. I agree it would be nicer to subsume this logic into align.

What is xr.align(..., join='exact') supposed to do?

> What happens if things go wrong?

I can add more careful checks once we sort out the align question.
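
A minimal sketch of checks (a) and (b), using only dataset metadata so that no index values are read from disk; the helper name and its exact rules are invented for illustration:

def looks_prealigned(datasets, concat_dim):
    """Cheap, metadata-only check: every dataset must have the same
    variable names, and the same size for every dimension except the
    one being concatenated. Index values are never loaded."""
    first = datasets[0]
    for ds in datasets[1:]:
        if set(ds.variables) != set(first.variables):
            return False
        for dim, size in ds.sizes.items():
            if dim != concat_dim and first.sizes.get(dim) != size:
                return False
    return True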

Reactions: none

rabernat (MEMBER) · comment 302496987 · created 2017-05-18T18:14:56Z · updated 2017-05-18T18:15:34Z
https://github.com/pydata/xarray/pull/1413#issuecomment-302496987

Let me expand on what this does.

Many netCDF datasets consist of multiple files with identical coordinates, except for one (e.g. time). With xarray we can open these datasets with open_mfdataset, which calls concat on the list of individual dataset objects. concat calls align, which loads all of the dimension indices (and, optionally, non-dimension coordinates) from each file and checks them for consistency / alignment.

This align step is potentially quite expensive for big collections of files with large indices. For example, an unstructured grid or particle-based dataset would just have a single dimension coordinate, with the same length as the data variables. If the user knows that the datasets are already aligned, this PR enables the alignment step to be skipped by passing the argument prealigned=True to concat. My goal is to avoid touching the disk as much as possible.
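
Sketched usage of the argument this PR proposes (the file list is illustrative; since the PR was ultimately closed, prealigned never became part of released xarray):

import xarray as xr

paths = ["jan.nc", "feb.nc", "mar.nc"]  # illustrative
datasets = [xr.open_dataset(p) for p in paths]

# The caller asserts that the non-concatenated coordinates are already
# identical across files, so the align step is skipped entirely.
combined = xr.concat(datasets, dim="time", prealigned=True)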

This PR is a work in progress. I still need to propagate the prealigned argument up to auto_combine and open_mfdataset.

An alternative API would be to add another option to the coords keyword, i.e. coords='prealigned'.

Feedback welcome.

Reactions: none

Table schema:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);