{"database": "github", "table": "issue_comments", "is_view": false, "human_description_en": "where issue = 833778859 sorted by updated_at descending", "rows": [["https://github.com/pydata/xarray/pull/5045#issuecomment-847906655", "https://api.github.com/repos/pydata/xarray/issues/5045", 847906655, "MDEyOklzc3VlQ29tbWVudDg0NzkwNjY1NQ==", 5635139, "2021-05-25T14:14:23Z", "2021-05-25T14:14:23Z", "MEMBER", "Docs would be great! Particularly if the current docs are out of date now. Thanks @matzegoebel ", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-847695616", "https://api.github.com/repos/pydata/xarray/issues/5045", 847695616, "MDEyOklzc3VlQ29tbWVudDg0NzY5NTYxNg==", 17001470, "2021-05-25T09:08:59Z", "2021-05-25T09:08:59Z", "CONTRIBUTOR", "I guess we should also add this feature to the documentation, right?", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-847694996", "https://api.github.com/repos/pydata/xarray/issues/5045", 847694996, "MDEyOklzc3VlQ29tbWVudDg0NzY5NDk5Ng==", 17001470, "2021-05-25T09:08:14Z", "2021-05-25T09:08:14Z", "CONTRIBUTOR", "Thanks for your help @max-sixty and @shoyer!", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-847654999", "https://api.github.com/repos/pydata/xarray/issues/5045", 847654999, "MDEyOklzc3VlQ29tbWVudDg0NzY1NDk5OQ==", 5635139, "2021-05-25T08:13:08Z", "2021-05-25T08:13:08Z", "MEMBER", "Thanks a lot @matzegoebel !", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", 
null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-840731561", "https://api.github.com/repos/pydata/xarray/issues/5045", 840731561, "MDEyOklzc3VlQ29tbWVudDg0MDczMTU2MQ==", 5635139, "2021-05-13T18:01:58Z", "2021-05-13T18:01:58Z", "MEMBER", "> I've fixed the location of the whats-new note. Is this ready to go in?\r\n\r\nYes. I have one typing question but we can merge regardless if needed", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-840716040", "https://api.github.com/repos/pydata/xarray/issues/5045", 840716040, "MDEyOklzc3VlQ29tbWVudDg0MDcxNjA0MA==", 2448579, "2021-05-13T17:35:46Z", "2021-05-13T17:35:46Z", "MEMBER", "I've fixed the location of the whats-new note. Is this ready to go in?", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-832950273", "https://api.github.com/repos/pydata/xarray/issues/5045", 832950273, "MDEyOklzc3VlQ29tbWVudDgzMjk1MDI3Mw==", 17001470, "2021-05-05T19:27:00Z", "2021-05-05T19:27:10Z", "CONTRIBUTOR", "> could you also resolve the merge conflicts? It seems we can't \"approve and run\" the CI with conflicts.\r\n\r\nOk I resolved them", "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-832902960", "https://api.github.com/repos/pydata/xarray/issues/5045", 832902960, "MDEyOklzc3VlQ29tbWVudDgzMjkwMjk2MA==", 14808389, "2021-05-05T18:13:22Z", "2021-05-05T18:13:22Z", "MEMBER", "could you also resolve the merge conflicts? 
It seems we can't \"approve and run\" the CI with conflicts.", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-832887625", "https://api.github.com/repos/pydata/xarray/issues/5045", 832887625, "MDEyOklzc3VlQ29tbWVudDgzMjg4NzYyNQ==", 17001470, "2021-05-05T17:49:35Z", "2021-05-05T17:49:35Z", "CONTRIBUTOR", "ok done\r\n", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-832871434", "https://api.github.com/repos/pydata/xarray/issues/5045", 832871434, "MDEyOklzc3VlQ29tbWVudDgzMjg3MTQzNA==", 1217238, "2021-05-05T17:23:27Z", "2021-05-05T17:23:27Z", "MEMBER", "> I'm not sure how to use this, because `np.can_cast` with unsafe casting mostly returns True, e.g. `np.can_cast(str, float, casting=\"unsafe\")==True`. The error that is implemented in the tests would not be caught then.\r\n\r\nOh, good point, thanks for checking.\r\n\r\n> But I could explicitly try to cast the new values to the datatype of the original values with `.astype`. Would that be an option?\r\n\r\nYes, I like this idea!", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-832854453", "https://api.github.com/repos/pydata/xarray/issues/5045", 832854453, "MDEyOklzc3VlQ29tbWVudDgzMjg1NDQ1Mw==", 17001470, "2021-05-05T16:57:06Z", "2021-05-05T16:57:06Z", "CONTRIBUTOR", "> `np.can_cast` with `casting='unsafe` can check this. It sounds like this would probably be something good to add to our checks :)\r\n\r\nI'm not sure how to use this, because `np.can_cast` with unsafe casting mostly returns True, e.g. 
`np.can_cast(str, float, casting=\"unsafe\")==True`. The error that is implemented in the tests would not be caught then. But I could explicitly try to cast the new values to the datatype of the original values with `.astype`. Would that be an option?\r\n", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-832838964", "https://api.github.com/repos/pydata/xarray/issues/5045", 832838964, "MDEyOklzc3VlQ29tbWVudDgzMjgzODk2NA==", 17001470, "2021-05-05T16:37:23Z", "2021-05-05T16:37:36Z", "CONTRIBUTOR", "> Could you kindly elaborate on this issue, maybe with a specific example?\r\n\r\nI think I somehow forgot the join=\"exact\" when testing the functionality of xr.align. So nevermind, I'll reimplement again. :P\r\n\r\n> `np.can_cast` with `casting='unsafe'` can check this. It sounds like this would probably be something good to add to our checks :)\r\n\r\nOk good point. I'll give it a try.\r\n", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-832821013", "https://api.github.com/repos/pydata/xarray/issues/5045", 832821013, "MDEyOklzc3VlQ29tbWVudDgzMjgyMTAxMw==", 1217238, "2021-05-05T16:11:37Z", "2021-05-05T16:11:37Z", "MEMBER", "> I revised the pre-assignment checks. In my opinion xr.align is not so helpful when checking that the dimension sizes and coordinates are consistent, because it doesn't fail when the dimension size of the two Datasets is different, but the coordinate of the second Dataset is a subset of the first one.\r\n\r\nCould you kindly elaborate on this issue, maybe with a specific example?\r\n\r\n> If, despite the checks, an error occurs during the assignment, e.g. 
due to a type error, and the dataset has been updated already partially, the user is informed about this.\r\n\r\n`np.can_cast` with `casting='unsafe'` can check this. It sounds like this would probably be something good to add to our checks :)", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-832658072", "https://api.github.com/repos/pydata/xarray/issues/5045", 832658072, "MDEyOklzc3VlQ29tbWVudDgzMjY1ODA3Mg==", 17001470, "2021-05-05T12:45:53Z", "2021-05-05T12:45:53Z", "CONTRIBUTOR", "I revised the pre-assignment checks. In my opinion xr.align is not so helpful when checking that the dimension sizes and coordinates are consistent, because it doesn't fail when the dimension size of the two Datasets is different, but the coordinate of the second Dataset is a subset of the first one. Therefore, I reimplemented the check that I had previously in a similar way. I also added a check for the wrong order of the dimensions, that you mentioned @shoyer.\r\nIf, despite the checks, an error occurs during the assignment, e.g. due to a type error, and the dataset has been updated already partially, the user is informed about this.", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-831737517", "https://api.github.com/repos/pydata/xarray/issues/5045", 831737517, "MDEyOklzc3VlQ29tbWVudDgzMTczNzUxNw==", 17001470, "2021-05-04T07:26:35Z", "2021-05-04T07:26:35Z", "CONTRIBUTOR", "@shoyer thanks for your suggestions! 
I included them as best as I could.", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-831718694", "https://api.github.com/repos/pydata/xarray/issues/5045", 831718694, "MDEyOklzc3VlQ29tbWVudDgzMTcxODY5NA==", 5635139, "2021-05-04T06:46:46Z", "2021-05-04T06:46:46Z", "MEMBER", "> ok, I deleted the copy stuff and included a few checks to catch possible errors before setting the values. Did I miss anything? How do we check for \"type errors that don't coerce\", as you mentioned?\r\n\r\nExcellent. Re the checks \u2014 I mostly meant that it was going to be very rare for something to get through \u2014\u00a0I don't think it's necessary to check for something like \"type errors that don't coerce\". ", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-831100015", "https://api.github.com/repos/pydata/xarray/issues/5045", 831100015, "MDEyOklzc3VlQ29tbWVudDgzMTEwMDAxNQ==", 17001470, "2021-05-03T08:11:39Z", "2021-05-03T08:11:39Z", "CONTRIBUTOR", "ok, I deleted the copy stuff and included a few checks to catch possible errors before setting the values. Did I miss anything? How do we check for \"type errors that don't coerce\", as you mentioned? 
\r\nThe setitem method of the LocIndexer now calls the setitem method of the Dataset class, so that we don't have redundant code.", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-830229757", "https://api.github.com/repos/pydata/xarray/issues/5045", 830229757, "MDEyOklzc3VlQ29tbWVudDgzMDIyOTc1Nw==", 5635139, "2021-04-30T16:57:53Z", "2021-04-30T16:58:09Z", "MEMBER", "Which errors would `__getitem__` miss? At least type errors that don't coerce; are there other cases?\r\n\r\nThe issue with a deep copy of the whole dataset is that it's very expensive. It's probably better to have that rather than nothing, but it could have confusing performance effects given that people are often going to be mutating values to reduce copies.\r\n\r\nThese aren't strongly held views though. Any thoughts from others? ", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-829864953", "https://api.github.com/repos/pydata/xarray/issues/5045", 829864953, "MDEyOklzc3VlQ29tbWVudDgyOTg2NDk1Mw==", 17001470, "2021-04-30T06:10:56Z", "2021-04-30T06:10:56Z", "CONTRIBUTOR", "Calling  __getitem__ is not enough to detect all possible errors, I guess. Another possibility would be to do a deep copy before the assignments, and if anything goes wrong, restore the original data from the copy. 
In this way, the assignments do not have to be done twice unless an error appears.", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-822467179", "https://api.github.com/repos/pydata/xarray/issues/5045", 822467179, "MDEyOklzc3VlQ29tbWVudDgyMjQ2NzE3OQ==", 5635139, "2021-04-19T13:29:07Z", "2021-04-19T13:29:07Z", "MEMBER", "Great, this is shaping up.\n\nI think we can find a way of failing early on bad indexes without attempting the whole operation on a copy.\n\nAt the very least, we could call `__getitem__` with the indexes and see whether that passes. There may be better ways yet.\n\nI also think that because the currently proposed code uses a shallow copy, it may be mutating the original when bad indexes are passed \u2014 it's worth adding a test to confirm.", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-822337622", "https://api.github.com/repos/pydata/xarray/issues/5045", 822337622, "MDEyOklzc3VlQ29tbWVudDgyMjMzNzYyMg==", 14808389, "2021-04-19T09:52:45Z", "2021-04-19T09:52:45Z", "MEMBER", "that's a flaky test which randomly fails (see #4539). You can safely ignore it, the CI should pass on the next run.", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-822332988", "https://api.github.com/repos/pydata/xarray/issues/5045", 822332988, "MDEyOklzc3VlQ29tbWVudDgyMjMzMjk4OA==", 17001470, "2021-04-19T09:46:29Z", "2021-04-19T09:46:29Z", "CONTRIBUTOR", "I don't understand what's the issue of the failing test. 
Do you?\r\n", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-822263369", "https://api.github.com/repos/pydata/xarray/issues/5045", 822263369, "MDEyOklzc3VlQ29tbWVudDgyMjI2MzM2OQ==", 17001470, "2021-04-19T08:04:49Z", "2021-04-19T08:04:49Z", "CONTRIBUTOR", "ok I tried to include your suggestions. Concerning @shoyer's point 3: Since I guess there could be a lot of different errors appearing, I created a copy of the data to be changed to check if the setitem fails before doing the actual update. That's of course suboptimal concerning the performance. What do you think, should we include checks for all conceivable errors or keep this test update?", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-822107419", "https://api.github.com/repos/pydata/xarray/issues/5045", 822107419, "MDEyOklzc3VlQ29tbWVudDgyMjEwNzQxOQ==", 1217238, "2021-04-19T01:21:22Z", "2021-04-19T01:21:22Z", "MEMBER", "Strong +1 from me on narrowing scope whenever possible. 
Adding features\nincrementally is much easier than doing things all at once :)\n\nOn Sun, Apr 18, 2021 at 6:15 PM Maximilian Roos ***@***.***>\nwrote:\n\n> ***@***.**** commented on this pull request.\n> ------------------------------\n>\n> In xarray/core/dataset.py\n> <https://github.com/pydata/xarray/pull/5045#discussion_r615490870>:\n>\n> > +            # loop over dataset variables\n> +            for var, da in self.items():\n> +                val = value\n> +                if type(value) == xr.core.dataset.Dataset:\n> +                    val = value[var]\n> +                # only set value if all dimensions are present\n> +                if all([k in da.dims for k in key.keys()]):\n> +                    da[key] = val\n>\n> V good point. Thanks.\n>\n> Either a) raising on missing dimensions or b) \"entirely set to zero\" in\n> this case, would be reasonable imo.\n>\n> To the extent you're less confident that (b) is correct, I'd suggest we\n> move forward with (a) and evaluating whether we should do (b) separately.\n>\n> (though @shoyer <https://github.com/shoyer> if you're up for it, let's\n> discuss the general pace of reviews next week and whether you think this\n> sort of \"narrowing\" of PR scope is a reasonable tactic)\n", "{\"total_count\": 1, \"+1\": 1, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-822055901", "https://api.github.com/repos/pydata/xarray/issues/5045", 822055901, "MDEyOklzc3VlQ29tbWVudDgyMjA1NTkwMQ==", 5635139, "2021-04-18T20:29:18Z", "2021-04-18T20:29:18Z", "MEMBER", "@matzegoebel forgive 
the very long delay on the review. We're planning to find a better system to ensure these don't drop through.\r\n\r\nI would be up for adding this, for consistency. I don't think I've ever needed the functionality, but it also doesn't make the interface more complicated given it's mirroring `__getitem__`.\r\n\r\nWe probably need to think through whether there are any corner cases here; I can't think of any atm.\r\n\r\nAny other thoughts?", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859], ["https://github.com/pydata/xarray/pull/5045#issuecomment-801083363", "https://api.github.com/repos/pydata/xarray/issues/5045", 801083363, "MDEyOklzc3VlQ29tbWVudDgwMTA4MzM2Mw==", 17001470, "2021-03-17T13:32:46Z", "2021-03-17T13:32:46Z", "CONTRIBUTOR", "I haven't updated the documentation yet, where it says that this feature is not supported yet. Do you think we need example code for this feature in the documentation?", "{\"total_count\": 0, \"+1\": 0, \"-1\": 0, \"laugh\": 0, \"hooray\": 0, \"confused\": 0, \"heart\": 0, \"rocket\": 0, \"eyes\": 0}", null, 833778859]], "truncated": false, "filtered_table_rows_count": 26, "expanded_columns": [], "expandable_columns": [[{"column": "issue", "other_table": "issues", "other_column": "id"}, "title"], [{"column": "user", "other_table": "users", "other_column": "id"}, "login"]], "columns": ["html_url", "issue_url", "id", "node_id", "user", "created_at", "updated_at", "author_association", "body", "reactions", "performed_via_github_app", "issue"], "primary_keys": ["id"], "units": {}, "query": {"sql": "select html_url, issue_url, id, node_id, user, created_at, updated_at, author_association, body, reactions, performed_via_github_app, issue from issue_comments where \"issue\" = :p0 order by updated_at desc limit 101", "params": {"p0": "833778859"}}, "facet_results": {"author_association": {"name": "author_association", "type": "column", 
"hideable": false, "toggle_url": "/github/issue_comments.json?issue=833778859", "results": [{"value": "CONTRIBUTOR", "label": "CONTRIBUTOR", "count": 13, "toggle_url": "http://xarray-datasette.fly.dev/github/issue_comments.json?issue=833778859&author_association=CONTRIBUTOR", "selected": false}, {"value": "MEMBER", "label": "MEMBER", "count": 13, "toggle_url": "http://xarray-datasette.fly.dev/github/issue_comments.json?issue=833778859&author_association=MEMBER", "selected": false}], "truncated": false}, "user": {"name": "user", "type": "column", "hideable": false, "toggle_url": "/github/issue_comments.json?issue=833778859", "results": [{"value": 17001470, "label": "matzegoebel", "count": 13, "toggle_url": "http://xarray-datasette.fly.dev/github/issue_comments.json?issue=833778859&user=17001470", "selected": false}, {"value": 5635139, "label": "max-sixty", "count": 7, "toggle_url": "http://xarray-datasette.fly.dev/github/issue_comments.json?issue=833778859&user=5635139", "selected": false}, {"value": 1217238, "label": "shoyer", "count": 3, "toggle_url": "http://xarray-datasette.fly.dev/github/issue_comments.json?issue=833778859&user=1217238", "selected": false}, {"value": 14808389, "label": "keewis", "count": 2, "toggle_url": "http://xarray-datasette.fly.dev/github/issue_comments.json?issue=833778859&user=14808389", "selected": false}, {"value": 2448579, "label": "dcherian", "count": 1, "toggle_url": "http://xarray-datasette.fly.dev/github/issue_comments.json?issue=833778859&user=2448579", "selected": false}], "truncated": false}, "issue": {"name": "issue", "type": "column", "hideable": false, "toggle_url": "/github/issue_comments.json?issue=833778859", "results": [{"value": 833778859, "label": "Allow assigning values to a subset of a dataset", "count": 26, "toggle_url": "http://xarray-datasette.fly.dev/github/issue_comments.json", "selected": true}], "truncated": false}}, "suggested_facets": [{"name": "reactions", "toggle_url": 
"http://xarray-datasette.fly.dev/github/issue_comments.json?issue=833778859&_facet=reactions"}, {"name": "created_at", "type": "date", "toggle_url": "http://xarray-datasette.fly.dev/github/issue_comments.json?issue=833778859&_facet_date=created_at"}, {"name": "updated_at", "type": "date", "toggle_url": "http://xarray-datasette.fly.dev/github/issue_comments.json?issue=833778859&_facet_date=updated_at"}], "next": null, "next_url": null, "private": false, "allow_execute_sql": true, "query_ms": 16.35419111698866}