issue_comments

11 rows where issue = 120038291 and user = 1217238 sorted by updated_at descending

Columns: id, html_url, issue_url, node_id, user, created_at, updated_at (sort column, descending), author_association, body, reactions, performed_via_github_app, issue
186492825 https://github.com/pydata/xarray/pull/668#issuecomment-186492825 https://api.github.com/repos/pydata/xarray/issues/668 MDEyOklzc3VlQ29tbWVudDE4NjQ5MjgyNQ== shoyer 1217238 2016-02-20T02:37:52Z 2016-02-20T02:37:52Z MEMBER

Woot!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rolling 120038291
186488866 https://github.com/pydata/xarray/pull/668#issuecomment-186488866 https://api.github.com/repos/pydata/xarray/issues/668 MDEyOklzc3VlQ29tbWVudDE4NjQ4ODg2Ng== shoyer 1217238 2016-02-20T02:26:58Z 2016-02-20T02:26:58Z MEMBER

OK, this looks good to me. Merge when you're ready!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rolling 120038291
185817586 https://github.com/pydata/xarray/pull/668#issuecomment-185817586 https://api.github.com/repos/pydata/xarray/issues/668 MDEyOklzc3VlQ29tbWVudDE4NTgxNzU4Ng== shoyer 1217238 2016-02-18T17:07:11Z 2016-02-18T17:07:11Z MEMBER

I'd also still love to see an explicit example where our behavior differs from pandas (in the last position if center=True) so we can try to figure out what's going on. This might actually be a bug on the pandas side :).

Generally this PR is looking very close. We could defer some of the API design work by keeping empty_like private for now, but I'm also happy to hash this out here.
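For reference, a minimal pandas-only sketch of the edge behaviour in question (values are for this toy input with pandas' rolling API; whether the PR matches pandas at the last position is exactly the open question above):

import pandas as pd

s = pd.Series(range(6))

# Centered window of 3: the last position only has the partial window [4, 5].
s.rolling(window=3, center=True).mean()
# -> [NaN, 1.0, 2.0, 3.0, 4.0, NaN]    (default min_periods == window)

s.rolling(window=3, center=True, min_periods=1).mean()
# -> [0.5, 1.0, 2.0, 3.0, 4.0, 4.5]    (partial windows allowed at both edges)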

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rolling 120038291
185816575 https://github.com/pydata/xarray/pull/668#issuecomment-185816575 https://api.github.com/repos/pydata/xarray/issues/668 MDEyOklzc3VlQ29tbWVudDE4NTgxNjU3NQ== shoyer 1217238 2016-02-18T17:03:37Z 2016-02-18T17:03:37Z MEMBER

What is the full set of functions like empty_like that we would want for the public API? zeros_like, ones_like, full_like, maybe missing_like?

empty_like (without a fill value) doesn't make a lot of sense for dask.array, but all the rest of these do, and it would be nice to have the xarray functions work on dask arrays by making new dask arrays.

One possibility, instead of making a separate missing_like, is to make xr.empty_like always fill with NaN. Users can always drop into numpy directly if they really want to make an array and not set the values -- this very rarely makes an actual difference for performance since memory allocation is so slow.
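As a rough sketch of what that family could look like in use (numpy-style names as proposed above; treating missing_like as full_like with a NaN fill value is just one option):

import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6.0), dims='x')

xr.zeros_like(da)           # same dims/coords/dtype, all zeros
xr.ones_like(da)            # same dims/coords/dtype, all ones
xr.full_like(da, np.nan)    # same dims/coords, filled with NaN ("missing_like")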

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rolling 120038291
167563505 https://github.com/pydata/xarray/pull/668#issuecomment-167563505 https://api.github.com/repos/pydata/xarray/issues/668 MDEyOklzc3VlQ29tbWVudDE2NzU2MzUwNQ== shoyer 1217238 2015-12-28T12:49:03Z 2015-12-28T12:49:03Z MEMBER

@jhamman how are we doing here? Are you waiting on a review from me?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rolling 120038291
162031347 https://github.com/pydata/xarray/pull/668#issuecomment-162031347 https://api.github.com/repos/pydata/xarray/issues/668 MDEyOklzc3VlQ29tbWVudDE2MjAzMTM0Nw== shoyer 1217238 2015-12-04T17:40:47Z 2015-12-04T17:40:47Z MEMBER

How did pandas land on this? To me it makes more sense as an argument to init but I'll go with whatever pandas decided for consistency.

Still unresolved, though Jeff Reback agrees with you. It's being discussed in the rolling PR currently.

Also: what about changing the default min_count to 0? I think that would be more consistent with pandas, which skips over missing values by default.
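For comparison, a minimal example of the skip-missing-by-default behaviour referred to here, in plain pandas:

import numpy as np
import pandas as pd

pd.Series([1.0, np.nan, 3.0]).sum()
# -> 4.0: the missing value is skipped rather than propagated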

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rolling 120038291
162030626 https://github.com/pydata/xarray/pull/668#issuecomment-162030626 https://api.github.com/repos/pydata/xarray/issues/668 MDEyOklzc3VlQ29tbWVudDE2MjAzMDYyNg== shoyer 1217238 2015-12-04T17:37:26Z 2015-12-04T17:37:26Z MEMBER

I wanted consistency between reduce, _bottleneck_reduce and iter.

Agreed, this would be nice. But if min_count=0, this won't be the case, because you will average over partial windows at the start of the rolling iteration. For example, you apply the aggregation function to windows of size [1, 2, 3, 3, 3, 3]. And the labels are also not consistent.
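A small pandas sketch of the partial windows being described (window of 3 over six values, min_periods=1):

import pandas as pd

s = pd.Series([1.0] * 6)
s.rolling(window=3, min_periods=1).sum()
# -> [1.0, 2.0, 3.0, 3.0, 3.0, 3.0], i.e. effective window sizes [1, 2, 3, 3, 3, 3]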

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rolling 120038291
161808000 https://github.com/pydata/xarray/pull/668#issuecomment-161808000 https://api.github.com/repos/pydata/xarray/issues/668 MDEyOklzc3VlQ29tbWVudDE2MTgwODAwMA== shoyer 1217238 2015-12-03T22:29:43Z 2015-12-03T22:29:43Z MEMBER

@shoyer - would you mind taking a look at what I've just tried (and failed) in ops.py and common.py? I think I'm missing a big piece of the injection puzzle.

I'll give this a test, but it looks like you have all the pieces to me....

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rolling 120038291
161804311 https://github.com/pydata/xarray/pull/668#issuecomment-161804311 https://api.github.com/repos/pydata/xarray/issues/668 MDEyOklzc3VlQ29tbWVudDE2MTgwNDMxMQ== shoyer 1217238 2015-12-03T22:22:00Z 2015-12-03T22:22:00Z MEMBER

For iteration, what about only iterating over full windows? Thinking about how I might use iteration, I think this might be more useful than returning some shrunk windows.

Concretely, this means that if you iterate over xray.DataArray(range(6), dims='x').rolling(x=3), results would have labels from 1 through 5.

I think you've done a pretty reasonable job of interpreting min_periods for iteration, but I would still vote for defining it only as an argument to the aggregation methods and not worrying about it for iteration. It keeps things a bit simpler and easier to understand. OTOH, if you can think of use cases for min_periods with iteration, I could be convinced :).
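A rough pure-Python sketch of the full-windows-only iteration proposed here (labelling each window by its last element is an assumption, not something settled in this thread):

def iter_full_windows(values, size):
    # Yield (label, window) pairs, skipping the shrunk windows at the start.
    for end in range(size, len(values) + 1):
        yield end - 1, values[end - size:end]

list(iter_full_windows(list(range(6)), 3))
# -> [(2, [0, 1, 2]), (3, [1, 2, 3]), (4, [2, 3, 4]), (5, [3, 4, 5])]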

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rolling 120038291
161791877 https://github.com/pydata/xarray/pull/668#issuecomment-161791877 https://api.github.com/repos/pydata/xarray/issues/668 MDEyOklzc3VlQ29tbWVudDE2MTc5MTg3Nw== shoyer 1217238 2015-12-03T21:36:50Z 2015-12-03T21:36:50Z MEMBER

Internet at work today is only working 20% of the time. I'm happy to take a look once things get back online :).

On Thu, Dec 3, 2015 at 1:05 PM, Joe Hamman notifications@github.com wrote:

@shoyer - would you mind taking a look at what I've just tried (and failed) in ops.py and common.py? I think I'm missing a big piece of the injection puzzle.

Reply to this email directly or view it on GitHub: https://github.com/xray/xray/pull/668#issuecomment-161784342

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rolling 120038291
161782889 https://github.com/pydata/xarray/pull/668#issuecomment-161782889 https://api.github.com/repos/pydata/xarray/issues/668 MDEyOklzc3VlQ29tbWVudDE2MTc4Mjg4OQ== shoyer 1217238 2015-12-03T20:59:28Z 2015-12-03T20:59:28Z MEMBER

How do you suggest we handle the bottleneck dependency? That is the reason for the failing tests at the moment.

You can either add a try/except around a top-level import of bottleneck, or only import bottleneck locally inside functions which need it. I think I would prefer the latter approach because it results in more intelligible error messages (ImportError: No module named bottleneck rather than NameError: name 'bottleneck' is not defined).
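A minimal sketch of the second (local import) pattern described here; the function and argument names are illustrative, not the PR's actual code:

def _rolling_sum_bottleneck(values, window, min_count=None, axis=-1):
    # Importing inside the function keeps bottleneck an optional dependency;
    # a missing install surfaces as a clear ImportError at call time.
    import bottleneck as bn
    return bn.move_sum(values, window, min_count=min_count, axis=axis)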

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature/rolling 120038291

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
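The schema above is enough to reproduce this page's listing directly; a small sqlite3 sketch (the database filename github.db is an assumption):

import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, body
    FROM issue_comments
    WHERE issue = 120038291 AND [user] = 1217238
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows))  # expected: 11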