issue_comments
8 rows where issue = 272004812 sorted by updated_at descending
Issue: apply_ufunc(dask='parallelized') output_dtypes for datasets · 8 comments
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 609866371 | https://github.com/pydata/xarray/issues/1699#issuecomment-609866371 | https://api.github.com/repos/pydata/xarray/issues/1699 | MDEyOklzc3VlQ29tbWVudDYwOTg2NjM3MQ== | jhamman 2443309 | 2020-04-06T15:31:16Z | 2020-04-06T15:31:16Z | MEMBER | also pinging @dcherian who has been working on a similar problem set with […] | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | apply_ufunc(dask='parallelized') output_dtypes for datasets 272004812 |
| 609650053 | https://github.com/pydata/xarray/issues/1699#issuecomment-609650053 | https://api.github.com/repos/pydata/xarray/issues/1699 | MDEyOklzc3VlQ29tbWVudDYwOTY1MDA1Mw== | crusaderky 6213168 | 2020-04-06T08:26:13Z | 2020-04-06T08:26:13Z | MEMBER | still relevant | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | apply_ufunc(dask='parallelized') output_dtypes for datasets 272004812 |
| 609098334 | https://github.com/pydata/xarray/issues/1699#issuecomment-609098334 | https://api.github.com/repos/pydata/xarray/issues/1699 | MDEyOklzc3VlQ29tbWVudDYwOTA5ODMzNA== | stale[bot] 26384082 | 2020-04-04T22:37:22Z | 2020-04-04T22:37:22Z | NONE | In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity. If this issue remains relevant, please comment here or remove the […] | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | apply_ufunc(dask='parallelized') output_dtypes for datasets 272004812 |
| 386835630 | https://github.com/pydata/xarray/issues/1699#issuecomment-386835630 | https://api.github.com/repos/pydata/xarray/issues/1699 | MDEyOklzc3VlQ29tbWVudDM4NjgzNTYzMA== | shoyer 1217238 | 2018-05-05T21:17:20Z | 2018-05-05T21:17:20Z | MEMBER | I agree with the concern about duck typing, but my concern with […] Another option would be to accept either objects with a dtype or dtypes in […] | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | apply_ufunc(dask='parallelized') output_dtypes for datasets 272004812 |
| 386721846 | https://github.com/pydata/xarray/issues/1699#issuecomment-386721846 | https://api.github.com/repos/pydata/xarray/issues/1699 | MDEyOklzc3VlQ29tbWVudDM4NjcyMTg0Ng== | crusaderky 6213168 | 2018-05-04T20:18:48Z | 2018-05-04T20:21:10Z | MEMBER | The key thing is that for most people it would be extremely elegant and practical to be able to duck-type wrappers around numpy, scipy, and numba kernels that automagically work with Variable, DataArray, and Dataset (see my example above). You'll agree on how ugly my 1-liner above would become: […] | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | apply_ufunc(dask='parallelized') output_dtypes for datasets 272004812 |
| 384134364 | https://github.com/pydata/xarray/issues/1699#issuecomment-384134364 | https://api.github.com/repos/pydata/xarray/issues/1699 | MDEyOklzc3VlQ29tbWVudDM4NDEzNDM2NA== | shoyer 1217238 | 2018-04-25T01:42:36Z | 2018-04-25T01:42:36Z | MEMBER | I'm not sure about adding […] Anyways, I agree that […] | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | apply_ufunc(dask='parallelized') output_dtypes for datasets 272004812 |
| 384073275 | https://github.com/pydata/xarray/issues/1699#issuecomment-384073275 | https://api.github.com/repos/pydata/xarray/issues/1699 | MDEyOklzc3VlQ29tbWVudDM4NDA3MzI3NQ== | crusaderky 6213168 | 2018-04-24T20:42:57Z | 2018-04-24T20:42:57Z | MEMBER | @shoyer that seems counter-intuitive for me - you are returning two datasets after all. If we go with the […] | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | apply_ufunc(dask='parallelized') output_dtypes for datasets 272004812 |
| 342670406 | https://github.com/pydata/xarray/issues/1699#issuecomment-342670406 | https://api.github.com/repos/pydata/xarray/issues/1699 | MDEyOklzc3VlQ29tbWVudDM0MjY3MDQwNg== | shoyer 1217238 | 2017-11-08T00:32:45Z | 2017-11-08T00:32:45Z | MEMBER | Yes, I like this. Though it's worth considering whether the syntax should reverse the list/dict nesting, e.g., […] | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | apply_ufunc(dask='parallelized') output_dtypes for datasets 272004812 |
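For context on what the comments above are debating, here is a minimal, hypothetical sketch of the existing `apply_ufunc(dask='parallelized', output_dtypes=...)` pattern for a single DataArray, assuming xarray and dask are installed; the toy data and the lambda are illustrative only. The Dataset-level spelling of `output_dtypes` discussed in the thread (a dict of dtypes per variable, or reversing the list/dict nesting) is a proposal, not an implemented API, so it is not shown.

```python
import numpy as np
import xarray as xr

# A toy DataArray backed by a chunked dask array.
da = xr.DataArray(np.random.rand(4, 3), dims=("x", "y")).chunk({"x": 2})

# With dask='parallelized', the wrapped function is only applied lazily to
# each chunk, so the output dtype must be declared up front.
result = xr.apply_ufunc(
    lambda a: a.astype("float32") * 2,
    da,
    dask="parallelized",
    output_dtypes=[np.float32],  # one dtype per output
)
print(result.dtype)  # float32, known without computing anything
```

The question raised in this issue is how that per-output list of dtypes should generalize when the function returns a Dataset whose variables have different dtypes.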
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
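As a usage note for the schema above, the query behind this page (all comments for issue 272004812, most recently updated first) can be reproduced with the standard-library sqlite3 module. This is a sketch only: the filename github.db is an assumption, so point it at whichever file github-to-sqlite wrote this table into.

```python
import sqlite3

# Hypothetical database path -- adjust to the actual github-to-sqlite output file.
conn = sqlite3.connect("github.db")

# All comments on issue 272004812, sorted by updated_at descending.
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ?
    ORDER BY updated_at DESC
    """,
    (272004812,),
).fetchall()

for comment_id, user, created, updated, association, body in rows:
    print(comment_id, updated, association, body[:60])
```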