issue_comments
13 rows where author_association = "CONTRIBUTOR", issue = 833778859 and user = 17001470, sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 847695616 | https://github.com/pydata/xarray/pull/5045#issuecomment-847695616 | https://api.github.com/repos/pydata/xarray/issues/5045 | MDEyOklzc3VlQ29tbWVudDg0NzY5NTYxNg== | matzegoebel 17001470 | 2021-05-25T09:08:59Z | 2021-05-25T09:08:59Z | CONTRIBUTOR | I guess we should also add this feature to the documentation, right? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | Allow assigning values to a subset of a dataset 833778859 |
| 847694996 | https://github.com/pydata/xarray/pull/5045#issuecomment-847694996 | https://api.github.com/repos/pydata/xarray/issues/5045 | MDEyOklzc3VlQ29tbWVudDg0NzY5NDk5Ng== | matzegoebel 17001470 | 2021-05-25T09:08:14Z | 2021-05-25T09:08:14Z | CONTRIBUTOR | Thanks for your help @max-sixty and @shoyer! | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | Allow assigning values to a subset of a dataset 833778859 |
| 832950273 | https://github.com/pydata/xarray/pull/5045#issuecomment-832950273 | https://api.github.com/repos/pydata/xarray/issues/5045 | MDEyOklzc3VlQ29tbWVudDgzMjk1MDI3Mw== | matzegoebel 17001470 | 2021-05-05T19:27:00Z | 2021-05-05T19:27:10Z | CONTRIBUTOR | OK, I resolved them. | {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | Allow assigning values to a subset of a dataset 833778859 |
| 832887625 | https://github.com/pydata/xarray/pull/5045#issuecomment-832887625 | https://api.github.com/repos/pydata/xarray/issues/5045 | MDEyOklzc3VlQ29tbWVudDgzMjg4NzYyNQ== | matzegoebel 17001470 | 2021-05-05T17:49:35Z | 2021-05-05T17:49:35Z | CONTRIBUTOR | ok done | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | Allow assigning values to a subset of a dataset 833778859 |
| 832854453 | https://github.com/pydata/xarray/pull/5045#issuecomment-832854453 | https://api.github.com/repos/pydata/xarray/issues/5045 | MDEyOklzc3VlQ29tbWVudDgzMjg1NDQ1Mw== | matzegoebel 17001470 | 2021-05-05T16:57:06Z | 2021-05-05T16:57:06Z | CONTRIBUTOR | I'm not sure how to use this, because | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | Allow assigning values to a subset of a dataset 833778859 |
| 832838964 | https://github.com/pydata/xarray/pull/5045#issuecomment-832838964 | https://api.github.com/repos/pydata/xarray/issues/5045 | MDEyOklzc3VlQ29tbWVudDgzMjgzODk2NA== | matzegoebel 17001470 | 2021-05-05T16:37:23Z | 2021-05-05T16:37:36Z | CONTRIBUTOR | I think I somehow forgot the join="exact" when testing the functionality of xr.align. So never mind, I'll reimplement it again. :P OK, good point. I'll give it a try. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | Allow assigning values to a subset of a dataset 833778859 |
| 832658072 | https://github.com/pydata/xarray/pull/5045#issuecomment-832658072 | https://api.github.com/repos/pydata/xarray/issues/5045 | MDEyOklzc3VlQ29tbWVudDgzMjY1ODA3Mg== | matzegoebel 17001470 | 2021-05-05T12:45:53Z | 2021-05-05T12:45:53Z | CONTRIBUTOR | I revised the pre-assignment checks. In my opinion, xr.align is not very helpful for checking that the dimension sizes and coordinates are consistent, because it doesn't fail when the dimension sizes of the two Datasets differ but the coordinate of the second Dataset is a subset of the first. Therefore, I reimplemented the check I had previously in a similar way. I also added the check for the wrong order of the dimensions that you mentioned, @shoyer. If, despite the checks, an error occurs during the assignment, e.g. due to a type error, and the dataset has already been partially updated, the user is informed about this. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | Allow assigning values to a subset of a dataset 833778859 |
| 831737517 | https://github.com/pydata/xarray/pull/5045#issuecomment-831737517 | https://api.github.com/repos/pydata/xarray/issues/5045 | MDEyOklzc3VlQ29tbWVudDgzMTczNzUxNw== | matzegoebel 17001470 | 2021-05-04T07:26:35Z | 2021-05-04T07:26:35Z | CONTRIBUTOR | @shoyer thanks for your suggestions! I included them as best I could. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | Allow assigning values to a subset of a dataset 833778859 |
| 831100015 | https://github.com/pydata/xarray/pull/5045#issuecomment-831100015 | https://api.github.com/repos/pydata/xarray/issues/5045 | MDEyOklzc3VlQ29tbWVudDgzMTEwMDAxNQ== | matzegoebel 17001470 | 2021-05-03T08:11:39Z | 2021-05-03T08:11:39Z | CONTRIBUTOR | OK, I deleted the copy stuff and included a few checks to catch possible errors before setting the values. Did I miss anything? How do we check for "type errors that don't coerce", as you mentioned? The setitem method of the LocIndexer now calls the setitem method of the Dataset class, so that we don't have redundant code. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | Allow assigning values to a subset of a dataset 833778859 |
| 829864953 | https://github.com/pydata/xarray/pull/5045#issuecomment-829864953 | https://api.github.com/repos/pydata/xarray/issues/5045 | MDEyOklzc3VlQ29tbWVudDgyOTg2NDk1Mw== | matzegoebel 17001470 | 2021-04-30T06:10:56Z | 2021-04-30T06:10:56Z | CONTRIBUTOR | Calling getitem is not enough to detect all possible errors, I guess. Another possibility would be to do a deep copy before the assignments and, if anything goes wrong, restore the original data from the copy. That way, the assignments do not have to be done twice unless an error occurs. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | Allow assigning values to a subset of a dataset 833778859 |
| 822332988 | https://github.com/pydata/xarray/pull/5045#issuecomment-822332988 | https://api.github.com/repos/pydata/xarray/issues/5045 | MDEyOklzc3VlQ29tbWVudDgyMjMzMjk4OA== | matzegoebel 17001470 | 2021-04-19T09:46:29Z | 2021-04-19T09:46:29Z | CONTRIBUTOR | I don't understand what the issue with the failing test is. Do you? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | Allow assigning values to a subset of a dataset 833778859 |
| 822263369 | https://github.com/pydata/xarray/pull/5045#issuecomment-822263369 | https://api.github.com/repos/pydata/xarray/issues/5045 | MDEyOklzc3VlQ29tbWVudDgyMjI2MzM2OQ== | matzegoebel 17001470 | 2021-04-19T08:04:49Z | 2021-04-19T08:04:49Z | CONTRIBUTOR | OK, I tried to include your suggestions. Concerning @shoyer's point 3: since I guess a lot of different errors could appear, I created a copy of the data to be changed, to check whether the setitem fails before doing the actual update. That's of course suboptimal for performance. What do you think: should we include checks for all conceivable errors, or keep this test update? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | Allow assigning values to a subset of a dataset 833778859 |
| 801083363 | https://github.com/pydata/xarray/pull/5045#issuecomment-801083363 | https://api.github.com/repos/pydata/xarray/issues/5045 | MDEyOklzc3VlQ29tbWVudDgwMTA4MzM2Mw== | matzegoebel 17001470 | 2021-03-17T13:32:46Z | 2021-03-17T13:32:46Z | CONTRIBUTOR | I haven't updated the documentation yet, where it still says that this feature is not supported. Do you think we need example code for this feature in the documentation? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | Allow assigning values to a subset of a dataset 833778859 |
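Several comments above (829864953, 822263369) discuss making subset assignment atomic: back up the affected data before assigning, and restore the backup if any assignment fails part-way, so the user never sees a partially updated dataset. A minimal pure-Python sketch of that copy-and-restore pattern, using plain dicts and lists — the `assign_subset` helper and its checks are hypothetical illustrations, not xarray's actual implementation:

```python
import copy

def assign_subset(data, updates):
    """Apply several key/value updates to `data` (a dict of lists),
    restoring the original contents if any assignment fails.
    Hypothetical stand-in for the setitem logic discussed above."""
    backup = copy.deepcopy(data)  # deep copy before the assignments
    try:
        for key, value in updates.items():
            if key not in data:
                raise KeyError(f"no variable named {key!r}")
            if len(value) != len(data[key]):
                raise ValueError(f"size mismatch for {key!r}")
            data[key][:] = value
    except Exception:
        # Roll back: restore the original data from the copy, so the
        # caller never observes a partially updated dataset.
        data.clear()
        data.update(backup)
        raise

ds = {"a": [1, 2, 3], "b": [4, 5, 6]}
try:
    assign_subset(ds, {"a": [9, 9, 9], "b": [7, 7]})  # second update fails
except ValueError:
    pass
print(ds)  # → {'a': [1, 2, 3], 'b': [4, 5, 6]}  ("a" was rolled back)
```

The trade-off raised in the thread applies here too: the deep copy avoids doing each assignment twice, but still pays the full cost of copying the data up front even when no error occurs.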
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
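The page above corresponds to a straightforward filter over this schema. A sketch of reproducing that query with Python's built-in sqlite3 module; the in-memory database, simplified table (foreign-key clauses dropped), and single sample row are illustrative assumptions — a real export would already contain all 13 rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE issue_comments (
        html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY,
        node_id TEXT, user INTEGER, created_at TEXT, updated_at TEXT,
        author_association TEXT, body TEXT, reactions TEXT,
        performed_via_github_app TEXT, issue INTEGER)"""
)
# One sample row taken from the table above.
conn.execute(
    "INSERT INTO issue_comments (id, user, author_association, body, "
    "updated_at, issue) VALUES (?, ?, ?, ?, ?, ?)",
    (847695616, 17001470, "CONTRIBUTOR",
     "I guess we should also add this feature to the documentation, right?",
     "2021-05-25T09:08:59Z", 833778859),
)

# The filter and sort that produced this page.
rows = conn.execute(
    "SELECT id, body FROM issue_comments "
    "WHERE author_association = 'CONTRIBUTOR' "
    "AND issue = 833778859 AND user = 17001470 "
    "ORDER BY updated_at DESC"
).fetchall()
print(rows[0][0])  # → 847695616
```

The two indexes above (on `issue` and `user`) are what let this filter avoid a full table scan on a large comments table.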