issue_comments

12 rows where author_association = "MEMBER" and issue = 181033674 sorted by updated_at descending

user (2 values)
  • shoyer 8
  • fmaussion 4

issue (1 value)
  • Attributes from netCDF4 intialization retained · 12

author_association (1 value)
  • MEMBER · 12
id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
290604158 https://github.com/pydata/xarray/pull/1038#issuecomment-290604158 https://api.github.com/repos/pydata/xarray/issues/1038 MDEyOklzc3VlQ29tbWVudDI5MDYwNDE1OA== shoyer 1217238 2017-03-31T03:11:00Z 2017-03-31T03:11:00Z MEMBER

OK, going to merge this anyways... the failing tests will be fixed by #1366

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes from netCDF4 intialization retained 181033674
289114272 https://github.com/pydata/xarray/pull/1038#issuecomment-289114272 https://api.github.com/repos/pydata/xarray/issues/1038 MDEyOklzc3VlQ29tbWVudDI4OTExNDI3Mg== shoyer 1217238 2017-03-24T18:55:41Z 2017-03-24T18:55:41Z MEMBER

Just restarted, let's see...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes from netCDF4 intialization retained 181033674
289113931 https://github.com/pydata/xarray/pull/1038#issuecomment-289113931 https://api.github.com/repos/pydata/xarray/issues/1038 MDEyOklzc3VlQ29tbWVudDI4OTExMzkzMQ== shoyer 1217238 2017-03-24T18:54:24Z 2017-03-24T18:54:24Z MEMBER

Travis is a shared environment that runs multiple tests concurrently. It's possible that we're running out of files due to other users or even other variants of our same build.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes from netCDF4 intialization retained 181033674
289109287 https://github.com/pydata/xarray/pull/1038#issuecomment-289109287 https://api.github.com/repos/pydata/xarray/issues/1038 MDEyOklzc3VlQ29tbWVudDI4OTEwOTI4Nw== shoyer 1217238 2017-03-24T18:35:56Z 2017-03-24T18:35:56Z MEMBER

@pwolfram if we're getting sporadic failures on Travis, it's probably better to skip the test by default. It's important for the test suite not to be flaky.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes from netCDF4 intialization retained 181033674
289108562 https://github.com/pydata/xarray/pull/1038#issuecomment-289108562 https://api.github.com/repos/pydata/xarray/issues/1038 MDEyOklzc3VlQ29tbWVudDI4OTEwODU2Mg== fmaussion 10050469 2017-03-24T18:33:01Z 2017-03-24T18:33:01Z MEMBER

Yes, it also happened on this PR: https://github.com/pydata/xarray/pull/1328

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes from netCDF4 intialization retained 181033674
289108229 https://github.com/pydata/xarray/pull/1038#issuecomment-289108229 https://api.github.com/repos/pydata/xarray/issues/1038 MDEyOklzc3VlQ29tbWVudDI4OTEwODIyOQ== shoyer 1217238 2017-03-24T18:31:36Z 2017-03-24T18:31:36Z MEMBER

It looks like one of the new many files tests is crashing:

xarray/tests/test_backends.py::OpenMFDatasetManyFilesTest::test_3_open_large_num_files_pynio
/home/travis/build.sh: line 62: 1561 Segmentation fault (core dumped) py.test xarray --cov=xarray --cov-report term-missing --verbose

https://travis-ci.org/pydata/xarray/jobs/214722901

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes from netCDF4 intialization retained 181033674
288456001 https://github.com/pydata/xarray/pull/1038#issuecomment-288456001 https://api.github.com/repos/pydata/xarray/issues/1038 MDEyOklzc3VlQ29tbWVudDI4ODQ1NjAwMQ== shoyer 1217238 2017-03-22T16:26:02Z 2017-03-22T16:26:02Z MEMBER

Yes, this works for me. Can you add a test case that covers this?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes from netCDF4 intialization retained 181033674
288434764 https://github.com/pydata/xarray/pull/1038#issuecomment-288434764 https://api.github.com/repos/pydata/xarray/issues/1038 MDEyOklzc3VlQ29tbWVudDI4ODQzNDc2NA== fmaussion 10050469 2017-03-22T15:25:45Z 2017-03-22T15:25:45Z MEMBER

> Note, I would say that open_mfdataset is no longer experimental because of its widespread use.

Yes, I also recently updated the IO docs in this respect and removed the experimental part: http://xarray.pydata.org/en/latest/io.html#id6

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes from netCDF4 intialization retained 181033674
288425486 https://github.com/pydata/xarray/pull/1038#issuecomment-288425486 https://api.github.com/repos/pydata/xarray/issues/1038 MDEyOklzc3VlQ29tbWVudDI4ODQyNTQ4Ng== fmaussion 10050469 2017-03-22T14:57:54Z 2017-03-22T14:57:54Z MEMBER

Yes, that's good for me. I would mention it somewhere in the docstring though.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes from netCDF4 intialization retained 181033674
267683615 https://github.com/pydata/xarray/pull/1038#issuecomment-267683615 https://api.github.com/repos/pydata/xarray/issues/1038 MDEyOklzc3VlQ29tbWVudDI2NzY4MzYxNQ== fmaussion 10050469 2016-12-16T20:03:10Z 2016-12-16T20:03:10Z MEMBER

AFAIC I'd be happy with a combined.attrs = datasets[0].attrs added before returning the combined dataset, which is already better than the current situation...

Do you have time to get back to this @pwolfram ?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes from netCDF4 intialization retained 181033674
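
The one-line workaround fmaussion describes above can be wrapped in a small helper around open_mfdataset. This is only an illustrative sketch, not xarray's API: the helper name is made up, and later xarray releases added a combine_attrs option that covers this case.

# Hypothetical helper: keep the first file's global attributes on the
# combined dataset, i.e. the combined.attrs = datasets[0].attrs idea above.
import xarray as xr

def open_mfdataset_keep_attrs(paths, **kwargs):
    combined = xr.open_mfdataset(paths, **kwargs)
    with xr.open_dataset(paths[0]) as first:
        combined.attrs = dict(first.attrs)
    return combined
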
251715380 https://github.com/pydata/xarray/pull/1038#issuecomment-251715380 https://api.github.com/repos/pydata/xarray/issues/1038 MDEyOklzc3VlQ29tbWVudDI1MTcxNTM4MA== shoyer 1217238 2016-10-05T15:50:06Z 2016-10-05T15:50:06Z MEMBER

> I did some more digging and see some of the potential issues because some of the concatenation / merging is done quasi-automatically, which reduces the number of objects that must be merged (e.g., https://github.com/pydata/xarray/blob/master/xarray/core/combine.py#L391). I'm assuming this is done for performance / simplicity. Is that true?

We have two primitive combine operations, concat (same variables, different coordinate values) and merge (different variables, same coordinate values). auto_combine needs to do both in some order.

You're right that the order of grouped is not deterministic (it uses a dict). Sorting by key for input into the list comprehension could fix that.

The comprehensive fix would be to pick a merge strategy for attributes, and apply it uniformly in each place where xarray merges variables or datasets (basically, in concat and all the merge variations). Possibly several merge strategies, with a keyword argument to switch between them.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes from netCDF4 intialization retained 181033674
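
The concat/merge split and the non-deterministic dict ordering shoyer mentions above can be illustrated schematically. The sketch below is not xarray's actual auto_combine code; the grouped structure and function name are assumptions made for illustration.

# Schematic only -- not xarray's real internals.
# concat joins datasets holding the same variables along a dimension;
# merge combines datasets holding different variables.
import xarray as xr

def combine_grouped(grouped, dim="time"):
    # grouped: dict mapping a group key -> list of datasets to concatenate.
    # Iterating over sorted keys makes the result deterministic, including
    # which dataset ends up "first" and thus whose attrs would be kept.
    concatenated = [xr.concat(grouped[key], dim=dim) for key in sorted(grouped)]
    return xr.merge(concatenated)
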
251547619 https://github.com/pydata/xarray/pull/1038#issuecomment-251547619 https://api.github.com/repos/pydata/xarray/issues/1038 MDEyOklzc3VlQ29tbWVudDI1MTU0NzYxOQ== shoyer 1217238 2016-10-05T00:00:30Z 2016-10-05T00:00:30Z MEMBER

Merge logic for attributes opens a whole big can of worms. I would probably just copy attributes from the first dataset (similar to what we do in concat), unless you want to overhaul the whole thing in a more comprehensive fashion.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes from netCDF4 intialization retained 181033674
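
The "merge strategy for attributes" idea raised in the two comments above could look roughly like the plain-Python sketch below, with a keyword argument selecting the strategy. The strategy names are assumptions for illustration; the idea later materialized in xarray as the combine_attrs option.

def merge_attrs(all_attrs, strategy="first"):
    # all_attrs: list of attrs dicts from the datasets being combined.
    if strategy == "first":  # copy attributes from the first dataset, as in concat
        return dict(all_attrs[0]) if all_attrs else {}
    if strategy == "drop":  # discard all attributes
        return {}
    if strategy == "drop_conflicts":  # keep only keys whose values never disagree
        merged, dropped = {}, set()
        for attrs in all_attrs:
            for key, value in attrs.items():
                if key in dropped:
                    continue
                if key in merged and merged[key] != value:
                    del merged[key]
                    dropped.add(key)
                else:
                    merged[key] = value
        return merged
    raise ValueError("unknown attrs strategy: %r" % strategy)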

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);