issue_comments

5 rows where author_association = "MEMBER", issue = 168470276 and user = 6213168 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
239289914 https://github.com/pydata/xarray/issues/927#issuecomment-239289914 https://api.github.com/repos/pydata/xarray/issues/927 MDEyOklzc3VlQ29tbWVudDIzOTI4OTkxNA== crusaderky 6213168 2016-08-11T20:57:00Z 2016-08-11T20:57:00Z MEMBER

Finished the first complete part, ready for merging: https://github.com/pydata/xarray/pull/963

Note: there is a failing unit test, TestDataset.test_broadcast_nocopy, which shows that broadcast on a Dataset performs a data copy when it shouldn't. Could you look into it?

I'm now moving to auto-calling broadcast inside concat()...
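
As a side note on that failing test, here is a minimal sketch of the kind of no-copy check it presumably performs (the test name is from the comment above; the np.shares_memory assertion is an assumption for illustration, not the actual test code):

import numpy as np
import xarray

# A trivial Dataset; broadcasting it against nothing should be a no-op.
ds = xarray.Dataset({'v': (('x',), np.arange(3))})
(ds2,) = xarray.broadcast(ds)

# If broadcast() avoids the copy, the two variables share their buffer.
assert np.shares_memory(ds['v'].values, ds2['v'].values)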

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  align() and broadcast() before concat() 168470276
238081972 https://github.com/pydata/xarray/issues/927#issuecomment-238081972 https://api.github.com/repos/pydata/xarray/issues/927 MDEyOklzc3VlQ29tbWVudDIzODA4MTk3Mg== crusaderky 6213168 2016-08-07T13:19:12Z 2016-08-07T13:19:12Z MEMBER

Started working on it. Still a lot of cleanup to be done. https://github.com/crusaderky/xarray/commit/8830437865c43e472101ca91a12f714ee43546cc

Observations:
  • I can't make sense of the skip_single_target hack, or of the whole special treatment inside align() for when there is only one argument. What's the benefit of it, and what would happen if we simply removed the special logic?
  • DataArray.reindex(copy=False) still performs a copy even when there's nothing to do. I'm a bit afraid to go and fix it right now, as I don't want to trigger domino effects.
  • I'm experiencing a lot of grief because assertDatasetIdentical expects both the coords and the data vars to be in the same order, which in some situations is simply impossible to control without touching vast areas of the code. As a more general and fundamental point, I can't see the benefit of using OrderedDict instead of a plain dict for coords, attrs, and Dataset.data_vars.
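
To make the reindex(copy=False) point concrete, here is a quick way to check whether a no-op reindex shares memory with the original (an illustrative snippet, not part of the test suite):

import numpy as np
import xarray

a = xarray.DataArray(np.arange(3), dims=['x'], coords={'x': [0, 1, 2]})
# Reindex onto the index it already has; with copy=False there is nothing to do.
b = a.reindex(x=[0, 1, 2], copy=False)

# Prints True only if reindex really avoided the copy.
print(np.shares_memory(a.values, b.values))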

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  align() and broadcast() before concat() 168470276
236822408 https://github.com/pydata/xarray/issues/927#issuecomment-236822408 https://api.github.com/repos/pydata/xarray/issues/927 MDEyOklzc3VlQ29tbWVudDIzNjgyMjQwOA== crusaderky 6213168 2016-08-02T07:20:13Z 2016-08-02T07:20:13Z MEMBER

I can work on it. Should we go for a default outer join + broadcast inside concat? If the inputs already arrive aligned, or if the user wants a different type of join, though, this will slow things down with unnecessary work. What's your policy on this?
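
For illustration, the proposed default would make concat() do implicitly what users can already spell out by hand. A sketch of that manual equivalent (the array values are made up, and the exact defaults inside concat() are precisely what's being asked about):

import xarray

a = xarray.DataArray([1, 2], dims=['x'], coords={'x': [10, 20]})
b = xarray.DataArray([[3, 4]], dims=['y', 'x'], coords={'x': [20, 30]})

# Outer join aligns 'x' to [10, 20, 30] (filling with NaN), broadcast adds the
# missing 'y' dimension to a, and only then do we concatenate along a new dim.
a2, b2 = xarray.align(a, b, join='outer')
a2, b2 = xarray.broadcast(a2, b2)
result = xarray.concat([a2, b2], dim='run')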

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  align() and broadcast() before concat() 168470276
236544958 https://github.com/pydata/xarray/issues/927#issuecomment-236544958 https://api.github.com/repos/pydata/xarray/issues/927 MDEyOklzc3VlQ29tbWVudDIzNjU0NDk1OA== crusaderky 6213168 2016-08-01T10:26:41Z 2016-08-01T10:40:27Z MEMBER

I'm now facing the equivalent problem with broadcast(). I need to concatenate three arrays:

a = xarray.DataArray([[1, 2], [3, 4]], dims=['time', 'scenario'], coords={'time': ['t1', 't2']})
b = xarray.DataArray([5, 6], dims=['time'], coords={'time': ['t3', 't4']})
c = xarray.DataArray(7, coords={'time': 't5'})

I want to broadcast dimension 'scenario' and concatenate on dimension 'time'. However, I can't invoke broadcast() because:
  1. a and b have misaligned time
  2. c has a time coord, but not a time dim; if you broadcast a and c, you will get incorrect results

Again, the solution would be to have an optional parameter exclude=['time'].
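
A sketch of how the proposed parameter would be used with the a, b and c defined above (exclude is the parameter under discussion here, not an existing guarantee):

# Broadcast everything except the concatenation dimension, then concatenate.
a2, b2, c2 = xarray.broadcast(a, b, c, exclude=['time'])
result = xarray.concat([a2, b2, c2], dim='time')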

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  align() and broadcast() before concat() 168470276
236541952 https://github.com/pydata/xarray/issues/927#issuecomment-236541952 https://api.github.com/repos/pydata/xarray/issues/927 MDEyOklzc3VlQ29tbWVudDIzNjU0MTk1Mg== crusaderky 6213168 2016-08-01T10:10:53Z 2016-08-01T10:10:53Z MEMBER

Just found xarray.core.alignment.partial_align(), which does exactly that. Not sure why this functionality isn't part of the public API. Looks like the solution is simply to merge partial_align() into align()?
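
If partial_align() were folded into align(), the public call would presumably end up looking something like this (a hypothetical sketch of the merged signature, reusing the exclude idea from the earlier comments):

import xarray

a = xarray.DataArray([1, 2], dims=['time'], coords={'time': ['t1', 't2']})
b = xarray.DataArray([[3], [4]], dims=['time', 'scenario'], coords={'time': ['t3', 't4']})

# Align on every shared dimension except 'time', leaving the mismatched time
# coords untouched so that concat(dim='time') can still join them afterwards.
a2, b2 = xarray.align(a, b, join='outer', exclude=['time'])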

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  align() and broadcast() before concat() 168470276

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);