issue_comments
15 rows where user = 1310437 sorted by updated_at descending
Each record below shows the comment id, author, author_association, timestamps (created_at / updated_at, the sort key, descending), comment URL, parent issue, reactions, and body.

261551610 · burnpanck (1310437) · CONTRIBUTOR · created/updated 2016-11-18T14:58:24Z
https://github.com/pydata/xarray/pull/1118#issuecomment-261551610
Issue: Do not convert subclasses of `ndarray` unless required (189095110) · Reactions: none
With the new changes, this will now conflict with #1128, though easy to solve.

260916046 · burnpanck (1310437) · CONTRIBUTOR · created/updated 2016-11-16T10:55:00Z
https://github.com/pydata/xarray/pull/1118#issuecomment-260916046
Issue: Do not convert subclasses of `ndarray` unless required (189095110) · Reactions: none
Travis succeeds, though there are lots of failures in environments where failure is allowed. They look unrelated to me, but I find it hard to tell. AppVeyor doesn't seem to run the quantities tests, so I guess the requirements are missing there too. Where would I add requirements for AppVeyor?

260710056 · burnpanck (1310437) · CONTRIBUTOR · created/updated 2016-11-15T17:34:15Z
https://github.com/pydata/xarray/pull/1118#issuecomment-260710056
Issue: Do not convert subclasses of `ndarray` unless required (189095110) · Reactions: none
You are right. There seem to be quite a number of varying …

260706103 · burnpanck (1310437) · CONTRIBUTOR · created/updated 2016-11-15T17:20:30Z
https://github.com/pydata/xarray/pull/1122#issuecomment-260706103
Issue: Fix slow object arrays indexing (189451582) · Reactions: none
Unfortunately, I was unable to come up with a good regression test. Interactive testing confirms that the fix is working (no iteration is performed, and the runtime of the example given in #1121 went down from ~1 s to 0.3 µs).

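The effect described above (dropping element-wise iteration during indexing) can be illustrated schematically. This is a pure-Python stand-in, not xarray's actual indexing code: the real fix lived inside xarray's internals, but the shape of the speedup is the same.

```python
# Stand-in for a large dtype=object array of wrapped values.
arr = list(range(1_000_000))

def get_with_iteration(a, i):
    # Schematic of the bug: "indexing" that walks every element first,
    # so each access costs O(n) on a million-element object array.
    for pos, value in enumerate(a):
        if pos == i:
            return value

def get_direct(a, i):
    # Schematic of the fix: plain positional indexing, no iteration, O(1).
    return a[i]

# Both return the same element; only the cost differs.
assert get_with_iteration(arr, 999_999) == get_direct(arr, 999_999) == 999_999
```

Per-access costs of roughly a second versus fractions of a microsecond, as reported in the comment, are exactly the O(n) vs. O(1) gap this sketch shows.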
260704684 · burnpanck (1310437) · CONTRIBUTOR · created/updated 2016-11-15T17:15:25Z
https://github.com/pydata/xarray/issues/1121#issuecomment-260704684
Issue: Performance degradation: `DataArray` with `dtype=object` of `DataArray` gets very slow indexing (189415576) · Reactions: none
I think I found it (#1122). I guess whenever a non-scalar assignment is made (as in …

260697960 · burnpanck (1310437) · CONTRIBUTOR · created/updated 2016-11-15T16:52:41Z
https://github.com/pydata/xarray/issues/1121#issuecomment-260697960
Issue: Performance degradation: `DataArray` with `dtype=object` of `DataArray` gets very slow indexing (189415576) · Reactions: none
Well, xarrays are way too useful not to nest them, even if that involves the scary …

260339257 · burnpanck (1310437) · CONTRIBUTOR · created 2016-11-14T13:52:16Z · updated 2016-11-14T14:51:26Z
https://github.com/pydata/xarray/pull/1119#issuecomment-260339257
Issue: Fix #1116 (rename of coordinates) (189099082) · Reactions: none
This fix handles the case …

258425477 · burnpanck (1310437) · CONTRIBUTOR · created/updated 2016-11-04T13:04:37Z
https://github.com/pydata/xarray/issues/1074#issuecomment-258425477
Issue: DataArray.apply is missing (186868181) · Reactions: none
As for the consistency concern, I wouldn't have expected that to be a big issue. I'd argue that most functions mapping …

258423515 · burnpanck (1310437) · CONTRIBUTOR · created/updated 2016-11-04T12:55:15Z
https://github.com/pydata/xarray/issues/1074#issuecomment-258423515
Issue: DataArray.apply is missing (186868181) · Reactions: none
Aha! For my use-case, …

256206191 · burnpanck (1310437) · CONTRIBUTOR · created 2016-10-25T23:13:37Z · updated 2016-10-25T23:17:40Z
https://github.com/pydata/xarray/issues/475#issuecomment-256206191
Issue: API design for pointwise indexing (95114700) · Reactions: none
Really? I get a …

256199958 · burnpanck (1310437) · CONTRIBUTOR · created/updated 2016-10-25T22:44:30Z
https://github.com/pydata/xarray/issues/475#issuecomment-256199958
Issue: API design for pointwise indexing (95114700) · Reactions: none
Without following the discussion in detail, what is the status here? In particular, I would like to do pointwise selection on multiple 1D coordinates using multidimensional indexer arrays. I can do this with the current …

Given this conceptually easy but somewhat tedious procedure, couldn't that be something that could quite easily be implemented into the current …

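The "conceptually easy but somewhat tedious" procedure described above (translate each label to its positional index along its own 1D coordinate, then gather elements pointwise) can be sketched without xarray at all. This is a plain-Python stand-in with made-up coordinates and data, not the actual xarray API:

```python
# Two 1D coordinates and a 2D data grid: data[i][j] pairs xs[i] with ys[j].
xs = [10, 20, 30]
ys = [1, 2]
data = [["a", "b"],
        ["c", "d"],
        ["e", "f"]]

def sel_points(x_labels, y_labels):
    """Pointwise label-based selection: one output element per (x, y) pair,
    rather than the outer product of the two label lists."""
    xi = [xs.index(x) for x in x_labels]   # label -> positional index
    yi = [ys.index(y) for y in y_labels]
    return [data[i][j] for i, j in zip(xi, yi)]

print(sel_points([10, 30], [2, 1]))  # ['b', 'e']
```

The same index arrays could be multidimensional; the only change is applying the label-to-position translation elementwise before the final gather.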
248255299 · burnpanck (1310437) · CONTRIBUTOR · created 2016-09-20T09:49:23Z · updated 2016-09-20T09:51:30Z
https://github.com/pydata/xarray/issues/525#issuecomment-248255299
Issue: support for units (100295585) · Reactions: none
Or another way to put it: while typical metadata/attributes are only relevant if you eventually read them (which is when you will notice if they were lost along the way), units are different: they work silently behind the scenes at all times, even when you do not explicitly look for them. You want an addition to fail if units don't match, without having to test explicitly whether the operands carry units first. So what should the ufunc_hook do if it finds two Variables that don't seem to carry units: raise an exception? Most probably not, as that would prevent using xarray without units at the same time. So if the units are lost along the way, you might never notice, and instead end up with wrong data. To me, that is just not unlikely enough, given the damage it can do (e.g. the time it takes to find out what's going on once you realise you are getting wrong data).

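The failure mode this comment describes (units silently dropped when stored as attributes, versus a loud error when they live in the data) can be illustrated with a toy quantity class. This is a hypothetical sketch, not xarray's ufunc_hook nor the API of any real units package:

```python
class Quantity:
    """Toy value-with-unit; real packages (pint, python-quantities) are far richer."""
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        # Units live *in* the data: a mismatched addition fails loudly,
        # with no need to check for units beforehand.
        if not isinstance(other, Quantity) or other.unit != self.unit:
            raise TypeError(
                f"cannot add {self.unit!r} and {getattr(other, 'unit', None)!r}")
        return Quantity(self.value + other.value, self.unit)

# Units as "optional" metadata: any operation that forgets to propagate
# the attribute silently produces unitless (and here, wrong) data.
a = {"data": 3.0, "attrs": {"units": "m"}}
b = {"data": 4.0, "attrs": {"units": "s"}}
silently_wrong = a["data"] + b["data"]   # 7.0, no error, units gone

# Units inside the data: the same mistake raises immediately.
try:
    Quantity(3.0, "m") + Quantity(4.0, "s")
except TypeError as e:
    print(e)
```

The attribute version never complains, which is exactly the "you might never notice, and end up with wrong data" scenario; the in-data version fails verbosely, as the comment hopes nested unit-carrying arrays would.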
248255426 · burnpanck (1310437) · CONTRIBUTOR · created/updated 2016-09-20T09:50:00Z
https://github.com/pydata/xarray/issues/525#issuecomment-248255426
Issue: support for units (100295585) · Reactions: none
So for now, I'm hunting for …

248252494 · burnpanck (1310437) · CONTRIBUTOR · created/updated 2016-09-20T09:36:24Z
https://github.com/pydata/xarray/issues/525#issuecomment-248252494
Issue: support for units (100295585) · Reactions: none
#988 would certainly allow me to implement unit functionality on top of xarray, probably by leveraging an existing units package. What I don't like about that approach is that I essentially end up with a separate, distinct implementation of units. I am afraid I will have to re-implement many of the helpers I wrote for working with physical quantities to make them xarray-aware. Furthermore, one important aspect of units packages is that they prevent conversion mistakes. But that only works as long as you don't forget to carry the units with you. Having units merely as attributes on an xarray means that losing them is as simple as forgetting to read the attributes when accessing the data.

The units-inside-xarray approach would have the advantage that whenever you access the data inside an xarray, you automatically have the units with you.

From a conceptual point of view, the units really are an integral part of the data, so they should sit right there with the data. Whenever you do something with the data, you have to deal with the units. That is true whether it is implemented as an attribute handler or directly on the data array. My fear is that attributes leave the impression of "optional" metadata that is too easily lost. E.g. xarray doesn't call its ufunc_hook for some operation where it should, and you silently lose units. My hope is that with nested arrays that carry units, you would instead fail verbosely. Of course, …

248059952 · burnpanck (1310437) · CONTRIBUTOR · created/updated 2016-09-19T17:24:21Z
https://github.com/pydata/xarray/issues/525#issuecomment-248059952
Issue: support for units (100295585) · Reactions: none
+1 for units support. I agree that parametrised dtypes would be the preferred solution, but I don't want to wait that long (I would be willing to contribute to that end, but I'm afraid it would exceed my knowledge of numpy). I have never used dask, and I understand that support for dask arrays is a central feature of xarray. However, the way I see it, if one put a (unit-aware) ndarray subclass into an xarray, units should work out of the box. As you discussed, this seems not so easy to make work together with dask (particularly in a generic way). But shouldn't that be an issue the dask community has to solve anyway (i.e. currently there is no way to use any units package together with dask, right)? In that sense, allowing such arrays inside xarrays would force users to choose between dask and units, which is something they have to do anyway. And for a large share of users, it would be a very quick route to units! Or am I missing something here? I'll just try to monkeypatch xarray to that end and see how far I get...

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
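The listing above ("rows where user = 1310437 sorted by updated_at descending") can be reproduced directly against this schema with Python's sqlite3. A minimal sketch: the users/issues tables here are bare stand-ins for the referenced tables, and only one sample row (taken from the first record above) is inserted, since the full dump is not included on this page.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY, [login] TEXT);
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY, [title] TEXT);
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
   [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# One sample comment row from the table above.
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at, body) VALUES (?, ?, ?, ?)",
    (261551610, 1310437, "2016-11-18T14:58:24Z",
     "With the new changes, this will now conflict with #1128, though easy to solve."),
)

# The query behind the page: comments by user 1310437, newest update first.
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments "
    "WHERE user = 1310437 ORDER BY updated_at DESC"
).fetchall()
print(rows)  # [(261551610, '2016-11-18T14:58:24Z')]
```

Because updated_at is stored as ISO 8601 text, lexicographic ORDER BY sorts it chronologically, which is why the schema can get away with TEXT timestamp columns.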