issue_comments
16 rows where user = 941907 (smartass101), sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
605468155 | https://github.com/pydata/xarray/issues/1040#issuecomment-605468155 | https://api.github.com/repos/pydata/xarray/issues/1040 | MDEyOklzc3VlQ29tbWVudDYwNTQ2ODE1NQ== | smartass101 941907 | 2020-03-28T16:12:38Z | 2020-03-28T16:12:38Z | NONE | These days I mostly use |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
DataArray.diff dim argument should be optional as is in docstring 181340410 | |
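For context (this is not from the comment itself, whose body is truncated in this dump): the linked issue concerns `DataArray.diff`, whose docstring suggested the `dim` argument was optional while the actual signature requires it. A minimal sketch of the behaviour in question, using made-up data:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(5.0), dims="x")

da.diff("x")   # works: first-order difference along "x"
# da.diff()    # raises TypeError: "dim" is required by the signature,
#              # even though the docstring at the time implied it was optional
```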
567082163 | https://github.com/pydata/xarray/issues/3574#issuecomment-567082163 | https://api.github.com/repos/pydata/xarray/issues/3574 | MDEyOklzc3VlQ29tbWVudDU2NzA4MjE2Mw== | smartass101 941907 | 2019-12-18T15:32:38Z | 2019-12-18T15:32:38Z | NONE |
Yes, sorry, written this way I now see what you meant and that will likely work indeed. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
apply_ufunc with dask='parallelized' and vectorize=True fails on compute_meta 528701910 | |
566938638 | https://github.com/pydata/xarray/issues/3574#issuecomment-566938638 | https://api.github.com/repos/pydata/xarray/issues/3574 | MDEyOklzc3VlQ29tbWVudDU2NjkzODYzOA== | smartass101 941907 | 2019-12-18T08:55:29Z | 2019-12-18T08:55:29Z | NONE |
I'm afraid that passing |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
apply_ufunc with dask='parallelized' and vectorize=True fails on compute_meta 528701910 | |
565186199 | https://github.com/pydata/xarray/issues/3574#issuecomment-565186199 | https://api.github.com/repos/pydata/xarray/issues/3574 | MDEyOklzc3VlQ29tbWVudDU2NTE4NjE5OQ== | smartass101 941907 | 2019-12-12T21:04:33Z | 2019-12-12T21:04:33Z | NONE |
Yes, now I recall that this was the issue, yeah. It doesn't even depend on your actual data really. Possible option 3. is to address https://github.com/dask/dask/issues/5642 directly (haven't found time to do a PR yet). Essentially from the code described in that issue I have the feeling that if a |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
apply_ufunc with dask='parallelized' and vectorize=True fails on compute_meta 528701910 | |
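For readers following the thread: the failure under discussion arises when `xr.apply_ufunc` is called with `dask='parallelized'` and `vectorize=True`, because dask's `compute_meta` step invokes the (vectorized) function on an empty placeholder array that many real functions cannot handle. A minimal sketch of the call pattern; the `slope` function and the data are hypothetical, and whether this actually fails depends on the xarray/dask versions in use:

```python
import numpy as np
import xarray as xr

def slope(y):
    # toy 1-D -> scalar function; fails on a 0-size meta array
    return np.polyfit(np.arange(y.size), y, 1)[0]

da = xr.DataArray(np.random.rand(100, 8), dims=("time", "channel")).chunk({"channel": 4})

result = xr.apply_ufunc(
    slope,
    da,
    input_core_dims=[["time"]],   # reduce over "time" for each channel
    vectorize=True,               # wrap slope with np.vectorize
    dask="parallelized",
    output_dtypes=[float],
)
```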
564934693 | https://github.com/pydata/xarray/issues/3574#issuecomment-564934693 | https://api.github.com/repos/pydata/xarray/issues/3574 | MDEyOklzc3VlQ29tbWVudDU2NDkzNDY5Mw== | smartass101 941907 | 2019-12-12T09:57:18Z | 2019-12-12T09:57:28Z | NONE | Sounds similar. But I'm not sure why you get the 0d issue when even your chunks don't (from a quick reading) seem to have a 0 size in any of the dimensions. Could you please show us the resulting chunk setup? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
apply_ufunc with dask='parallelized' and vectorize=True fails on compute_meta 528701910 | |
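As a side note (not from the original thread), the chunk layout being asked about can be inspected via the `.chunks` attribute of a dask-backed DataArray:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.zeros((100, 8)), dims=("time", "channel")).chunk({"time": 25})
print(da.chunks)   # ((25, 25, 25, 25), (8,)) -- one tuple of block sizes per dimension
```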
558616375 | https://github.com/pydata/xarray/issues/3574#issuecomment-558616375 | https://api.github.com/repos/pydata/xarray/issues/3574 | MDEyOklzc3VlQ29tbWVudDU1ODYxNjM3NQ== | smartass101 941907 | 2019-11-26T12:56:47Z | 2019-11-26T12:56:47Z | NONE | Another approach would be to bypass
Perhaps this is an oversight in |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
apply_ufunc with dask='parallelized' and vectorize=True fails on compute_meta 528701910 | |
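The comment above is truncated, but it proposes bypassing the meta computation. As background, and as an assumption about what is being referred to, dask itself allows supplying `meta` explicitly so the function is never called on a 0-size placeholder array; a minimal sketch using dask directly:

```python
import dask.array as darr
import numpy as np

x = darr.random.random((100, 8), chunks=(25, 8))

# Supplying meta explicitly skips calling the function on an empty
# placeholder array just to infer the output dtype/array type.
y = x.map_blocks(
    lambda b: b.mean(axis=0, keepdims=True),
    chunks=(1, 8),
    meta=np.empty((0, 0), dtype=float),
)
```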
430946620 | https://github.com/pydata/xarray/issues/1471#issuecomment-430946620 | https://api.github.com/repos/pydata/xarray/issues/1471 | MDEyOklzc3VlQ29tbWVudDQzMDk0NjYyMA== | smartass101 941907 | 2018-10-18T09:48:20Z | 2018-10-18T09:48:20Z | NONE | I indeed often resort to using a |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
sharing dimensions across dataarrays in a dataset 241290234 | |
430324391 | https://github.com/pydata/xarray/issues/1471#issuecomment-430324391 | https://api.github.com/repos/pydata/xarray/issues/1471 | MDEyOklzc3VlQ29tbWVudDQzMDMyNDM5MQ== | smartass101 941907 | 2018-10-16T17:24:42Z | 2018-10-16T17:46:17Z | NONE | I've hit this design limitation quite often as well, with several use-cases, both in experiment and simulation. It detracts from xarray's power of conveniently and transparently handling coordinate meta-data. From the Why xarray? page:
Adding effectively dummy dimensions or coordinates is essentially what this alignment design forces us to do. A possible solution would be something like having (some) coordinate arrays in an (Unaligned)Dataset be a "reducible" MultiIndex (it would reduce to an Index for each DataArray). A workaround can be using MultiIndex coordinates directly, but then alignment cannot be done easily as levels do not behave as real dimensions.
Use-case examples:
1. coordinate "metadata"
I often have measurements on related axes, but also with additional coordinates (different positions, etc.). Consider:
What I would like to get (pseudocode):
While it is possible to
2. unaligned time domains
This is a large problem, especially when different time bases are involved. A difference in sampling intervals will blow up the storage with a huge number of NaN values, which of course greatly complicates further calculations, e.g. filtering in the time domain. Even just non-overlapping time intervals will require at least double the storage. I often find myself resorting rather to |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
sharing dimensions across dataarrays in a dataset 241290234 | |
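The storage blow-up described under "unaligned time domains" is easy to see with xarray's default outer-join alignment; a minimal sketch with made-up signals (the comment's own workaround is truncated in this dump, but keeping separate objects, e.g. in a plain dict, is one way to avoid the padding):

```python
import numpy as np
import xarray as xr

# Two signals sampled on different time bases
a = xr.DataArray(np.random.rand(1000), dims="time",
                 coords={"time": np.linspace(0, 1, 1000)}, name="a")
b = xr.DataArray(np.random.rand(10), dims="time",
                 coords={"time": np.linspace(0, 1, 10)}, name="b")

ds = xr.merge([a, b])        # default outer join on "time"
print(ds.sizes["time"])      # union of both time bases; "b" is mostly NaN padding

# One workaround: keep the signals as separate objects instead of one Dataset
signals = {"a": a.to_dataset(), "b": b.to_dataset()}
```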
261496603 | https://github.com/pydata/xarray/issues/1130#issuecomment-261496603 | https://api.github.com/repos/pydata/xarray/issues/1130 | MDEyOklzc3VlQ29tbWVudDI2MTQ5NjYwMw== | smartass101 941907 | 2016-11-18T10:16:17Z | 2016-11-18T10:16:17Z | NONE |
It would be just one extra call to a function, which is very simple. As I commented in #1074, I think it makes more sense to have
I think that could be quite likely as one might want to apply a DataArray-compatible function. This would force users to remember which type of "function applier" to use for a given function and might be confusing. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
pipe, apply should call maybe_wrap_array, possibly resolve dim->axis 189998469 | |
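For context, the issue title refers to `maybe_wrap_array`, xarray's internal helper for re-wrapping plain ndarray results into DataArrays where possible. The point of the discussion shows up in a small sketch (the functions here are hypothetical):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(4, 3), dims=("time", "x"))

# A ufunc preserves the DataArray: dims and coords survive
squared = da.pipe(np.square)

# A function returning a bare ndarray loses the metadata unless the caller
# (or a maybe_wrap_array-style helper) re-wraps it
detrended = da.pipe(lambda arr: arr.values - arr.values.mean(axis=0))

print(type(squared), type(detrended))   # DataArray vs numpy.ndarray
```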
261495282 | https://github.com/pydata/xarray/issues/1074#issuecomment-261495282 | https://api.github.com/repos/pydata/xarray/issues/1074 | MDEyOklzc3VlQ29tbWVudDI2MTQ5NTI4Mg== | smartass101 941907 | 2016-11-18T10:09:48Z | 2016-11-18T10:09:48Z | NONE | Actually, I think that |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
DataArray.apply is missing 186868181 | |
261223890 | https://github.com/pydata/xarray/issues/1074#issuecomment-261223890 | https://api.github.com/repos/pydata/xarray/issues/1074 | MDEyOklzc3VlQ29tbWVudDI2MTIyMzg5MA== | smartass101 941907 | 2016-11-17T11:29:38Z | 2016-11-17T11:29:38Z | NONE | I think #1130 is related. I also think that |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
DataArray.apply is missing 186868181 | |
261200107 | https://github.com/pydata/xarray/issues/1080#issuecomment-261200107 | https://api.github.com/repos/pydata/xarray/issues/1080 | MDEyOklzc3VlQ29tbWVudDI2MTIwMDEwNw== | smartass101 941907 | 2016-11-17T09:41:18Z | 2016-11-17T09:41:18Z | NONE | Thank you for continuing this discussion even though you didn't agree with the initial proposal. I have accepted and embraced option 3) as it is indeed about the cleanest and most readable option. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
acccessor extending approach limits functional programming approach, make direct monkey-patching also possible 187373423 | |
260116620 | https://github.com/pydata/xarray/issues/1080#issuecomment-260116620 | https://api.github.com/repos/pydata/xarray/issues/1080 | MDEyOklzc3VlQ29tbWVudDI2MDExNjYyMA== | smartass101 941907 | 2016-11-12T11:28:02Z | 2016-11-12T11:28:02Z | NONE |
Good point, in that case explicit namespacing indeed helps.
A module-level namespace has nothing to do with the class namespace, but I see you try to tie them, which makes sense in relation to the argument about reading code in text form. However, that may not be clear to Python programmers, as those namespaces are not tied in reality, so it would be better to mention it in the docs. BTW, if you are enforcing some specific style guide, please note that in the docs as well. And I hope you strike the right balance between style complacency and universality.
My problem with non-functional paradigms lies more in the
That is indeed a good alternative, just not sure my colleagues will like the transition from |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
acccessor extending approach limits functional programming approach, make direct monkey-patching also possible 187373423 | |
258702758 | https://github.com/pydata/xarray/issues/1080#issuecomment-258702758 | https://api.github.com/repos/pydata/xarray/issues/1080 | MDEyOklzc3VlQ29tbWVudDI1ODcwMjc1OA== | smartass101 941907 | 2016-11-06T19:09:43Z | 2016-11-06T19:09:43Z | NONE | The namespace argument doesn't seem very convincing since you already implement many methods which may shadow variables (mean, diff). By limiting control of the namespace you make some uses somewhat inconvenient. If you want users to use DataArray as a general, universal and also extensible container, limiting its namespace goes against that. If they shadow variables with their methods, that's their decision to make. While it may seem cleaner to have a stricter API, in real use cases users care more about convenient code access than where it came from. And when they look at the method object it will clearly tell them where it was defined. Python's introspection capabilities are powerful enough that users can find out such information. What I meant by point 2 was that in many cases one just needs a simple method, and with the accessor approach one has to write extra lines of code like the ones you suggested earlier that may later seem cryptic. Caching of the accessor can indeed be useful, just not always. If you want people to develop plugins, make it as simple as possible and yet also advanced for those who require it. And then there's also the problem of accessors not being usable in functional programming paradigms. Tl;dr: accessors have benefits (namespace containment, caching) but also limitations (not a functional paradigm, overkill sometimes). Give users more control over methods and you'll get more plugins. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
acccessor extending approach limits functional programming approach, make direct monkey-patching also possible 187373423 | |
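For readers of this thread, the accessor pattern being debated is the one registered via xarray's `register_dataarray_accessor` decorator; a minimal sketch with a hypothetical accessor name and method:

```python
import xarray as xr

@xr.register_dataarray_accessor("tools")
class ToolsAccessor:
    def __init__(self, da):
        self._da = da          # the accessor instance is cached per object

    def demean(self, dim):
        return self._da - self._da.mean(dim)

da = xr.DataArray([[1.0, 2.0], [3.0, 4.0]], dims=("time", "x"))
da.tools.demean("time")        # methods live under the "tools" namespace
```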
258690368 | https://github.com/pydata/xarray/issues/1082#issuecomment-258690368 | https://api.github.com/repos/pydata/xarray/issues/1082 | MDEyOklzc3VlQ29tbWVudDI1ODY5MDM2OA== | smartass101 941907 | 2016-11-06T16:02:36Z | 2016-11-06T16:02:36Z | NONE | I vote for warning by default. Raising an error brings more inconvenience than it's worth. Remember to warn each time, not just on the first code run. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Issue a warning when overwriting attributes with accessors instead of erroring 187560717 | |
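The behaviour voted for here, warning on override rather than raising, can be sketched in a few lines; this illustrates the proposal only and is not xarray's actual implementation:

```python
import warnings

def register_accessor(cls, name, accessor):
    """Sketch of the warn-instead-of-raise registration proposed in the comment."""
    if hasattr(cls, name):
        # warn every time a clash happens, not only on the first run
        warnings.warn(
            f"registering accessor {name!r} overrides an existing attribute of "
            f"{cls.__name__}",
            stacklevel=2,
        )
    setattr(cls, name, accessor)
```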
258623314 | https://github.com/pydata/xarray/issues/1080#issuecomment-258623314 | https://api.github.com/repos/pydata/xarray/issues/1080 | MDEyOklzc3VlQ29tbWVudDI1ODYyMzMxNA== | smartass101 941907 | 2016-11-05T16:41:07Z | 2016-11-05T16:41:07Z | NONE | Thank you for your response. I still don't understand why you are pushing accessors in place of methods to such an extent. Is it because of namespace growth/conflicts? There are already many methods like
While the solutions you presented are usable, they seem like workarounds and somewhat redundant or add extra like overhead (in terms of writing code). Registering extra dataset accessors where DataArray method application would do seems again redundant.
Could you please give some clear arguments why you discourage the use of normal methods? The two arguments listed in the docs don't really make a compelling case against method monkey-patching, because
1. name clashes can be easily checked for either approach (in either case you just check the existence of a class attribute)
2. caching on the dataset sometimes makes no sense and just adds redundancy and complicates the design and registering of extra functionality
I'm not trying to say that the accessor approach is wrong, I'm sure it makes sense for certain plugins. I'm just trying to share my experience with a very similar case where the simpler method approach turned out to be satisfactory, and I think enabling it would increase the chances of more xarray plugins (which may not need accessor logic) coming to life. Btw, perhaps it might be better to (perhaps optionally) issue a warning when overriding an existing class attribute during registration instead of completely refusing to do so. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
acccessor extending approach limits functional programming approach, make direct monkey-patching also possible 187373423 |
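The "simpler method approach" argued for in this comment is plain monkey-patching of the class, which also keeps the function usable in a functional style; a minimal sketch with a hypothetical `normalize` helper:

```python
import xarray as xr

def normalize(da, dim):
    """Scale a DataArray to zero mean and unit variance along dim."""
    return (da - da.mean(dim)) / da.std(dim)

# Direct monkey-patching: the same function works both as a plain function
# (functional style) and as a method on every DataArray.
xr.DataArray.normalize = normalize

da = xr.DataArray([[1.0, 2.0], [3.0, 4.0]], dims=("time", "x"))
da.normalize("time")      # method style
normalize(da, "time")     # functional style, same function
```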
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);