issues
11 rows where user = 45271239 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2152779535 | PR_kwDOAMm_X85n15Fl | 8784 | Do not attempt to broadcast when global option ``arithmetic_broadcast=False`` | etienneschalk 45271239 | closed | 0 | 1 | 2024-02-25T14:00:57Z | 2024-03-13T15:36:34Z | 2024-03-13T15:36:34Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/8784 | Follow-up PR after #8698
Motive: Refer to #8698 for the history. In this PR more specifically:
Unrelated: Also adds a decorator to handle the optional dependency |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8784/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
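The option named in the title is illustrated below with a minimal sketch (my own, not taken from the PR body), assuming `arithmetic_broadcast` is exposed through `xr.set_options` as the title suggests: by default, arithmetic between arrays with different dimensions broadcasts silently, and disabling the option should make the same operation raise instead.

```python
import xarray as xr

da_x = xr.DataArray([1, 2, 3], dims="x")
da_y = xr.DataArray([10, 20], dims="y")

# Default behavior: the differing dimensions are broadcast against each other.
print(dict((da_x + da_y).sizes))  # {'x': 3, 'y': 2}

# With the global option disabled, broadcasting is expected to raise instead.
with xr.set_options(arithmetic_broadcast=False):
    try:
        da_x + da_y
    except ValueError as err:
        print(err)
```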
2140225209 | PR_kwDOAMm_X85nLLgJ | 8761 | Use ruff for formatting | etienneschalk 45271239 | open | 0 | 10 | 2024-02-17T16:04:18Z | 2024-02-27T20:11:57Z | CONTRIBUTOR | 1 | pydata/xarray/pulls/8761 |
Note: many inline |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2140173727 | I_kwDOAMm_X85_kHWf | 8760 | Use `ruff` for formatting | etienneschalk 45271239 | open | 0 | 0 | 2024-02-17T15:07:17Z | 2024-02-26T05:58:53Z | CONTRIBUTOR | What is your issue? Use
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8760/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
2127814221 | PR_kwDOAMm_X85mhHB1 | 8729 | Reinforce alignment checks when `join='exact'` | etienneschalk 45271239 | closed | 0 | 0 | 2024-02-09T20:36:46Z | 2024-02-25T12:51:54Z | 2024-02-25T12:51:54Z | CONTRIBUTOR | 1 | pydata/xarray/pulls/8729 | :information_source: Companion PR to #8698. Aims to check the consequences of transforming |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
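For context, here is a small sketch (not code from the PR) of what `join='exact'` already guarantees for indexed coordinates: alignment raises instead of silently intersecting or unioning the labels.

```python
import xarray as xr

da1 = xr.DataArray([1, 2, 3], dims="x", coords={"x": [0, 1, 2]})
da2 = xr.DataArray([4, 5, 6], dims="x", coords={"x": [1, 2, 3]})

try:
    xr.align(da1, da2, join="exact")
except ValueError as err:
    # The 'x' indexes are not equal, so exact alignment refuses to proceed.
    print(err)
```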
2116618415 | PR_kwDOAMm_X85l7Cdb | 8698 | New alignment option: `join='strict'` | etienneschalk 45271239 | closed | 0 | 5 | 2024-02-03T17:58:43Z | 2024-02-25T09:09:37Z | 2024-02-25T09:09:37Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/8698 | Title: New alignment option:
Motive: This PR is motivated by solving the following issues:
The current PR does not solve the unexpected issue described in #8231 without a change in user code. Indeed, the tests written show that to get the said expected behavior, the user would have to use the new `join='strict'` option explicitly. This may not be enough to fix #8231; if it isn't, I can split the PR into two, the first one adding the new option. Technical Details: I try to detail my thought process here. Please correct me if there is anything wrong. This is my first time digging into this core logic! Here is my understanding of the terms:
Input data for Scenario 1:

```python
from xarray import Dataset

ds1 = Dataset(
    coords={
        "x_center": ("x_center", [1, 2, 3]),
        "x_outer": ("x_outer", [0.5, 1.5, 2.5, 3.5]),
    },
)
```

Input data for Scenario 2:

```python
from xarray import Dataset

ds1 = Dataset(
    data_vars={
        "a": ("x_center", [1, 2, 3]),
        "b": ("x_outer", [0.5, 1.5, 2.5, 3.5]),
    },
)
```

The logic for non-indexed dimensions was working "as expected", as it relies on the existing size checks. However, the logic for indexed dimensions was surprising, as such an expected check on the dimensions' sizes was not performed. A check exists in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
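To illustrate the motivation with current behavior (my own sketch, not code from the PR): a size mismatch on a non-indexed dimension already fails at broadcasting time, while the same mismatch on an indexed dimension is silently resolved by the default inner join.

```python
import xarray as xr

# Non-indexed dimension: mismatched sizes raise when the operands are broadcast.
a = xr.DataArray([1, 2, 3], dims="x")
b = xr.DataArray([1, 2, 3, 4], dims="x")
try:
    a + b
except ValueError as err:
    print(err)  # mismatched lengths for dimension 'x'

# Indexed dimension: the default inner join silently intersects the labels.
c = xr.DataArray([1, 2, 3], dims="x", coords={"x": [0, 1, 2]})
d = xr.DataArray([1, 2, 3, 4], dims="x", coords={"x": [0, 1, 2, 3]})
print(dict((c + d).sizes))  # {'x': 3} -- no error, one label is dropped
```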
2117187646 | PR_kwDOAMm_X85l85Qf | 8702 | Add a simple `nbytes` representation in DataArrays and Dataset `repr` | etienneschalk 45271239 | closed | 0 | 23 | 2024-02-04T16:37:41Z | 2024-02-20T11:15:51Z | 2024-02-07T20:47:37Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/8702 | Edit: contrary to what the title suggests, this is not an opt-in feature; it is enabled by default
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8702/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
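A quick sketch of where the size information appears, assuming the feature landed as merged; `nbytes` itself is a long-standing property, the PR only surfaces it in the repr header.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.zeros((2, 3), dtype=np.float64), dims=("x", "y"))
print(da.nbytes)  # 48 (2 * 3 * 8 bytes)
print(da)         # the first repr line also carries a human-readable size, e.g. "Size: 48B"
```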
2140968762 | I_kwDOAMm_X85_nJc6 | 8763 | Documentation 404 not found for "Suggest Edit" link in "API Reference" pages | etienneschalk 45271239 | open | 0 | 0 | 2024-02-18T12:39:25Z | 2024-02-18T12:39:25Z | CONTRIBUTOR | What happened? Concrete example: let's say I am currently reading the documentation of DataArray.resample. I would like to have a look at the internals and see the code directly on GitHub. We can see a GitHub icon with 3 links: - Repository: leads to the home page of the repo: https://github.com/pydata/xarray - Suggest edit: leads to a 404 Not Found, as it points to the generated documentation - Open issue (generic link to open an issue) What did you expect to happen? The second link, "Suggest edit", should actually lead to the source code, as the documentation is auto-generated from the docstrings themselves. Maybe it could be renamed to something like "View source". Example of other repos having this feature: Minimal Complete Verifiable Example
MVCE confirmation
Relevant log output
Anything else we need to know? No response. Environment: N/A |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8763/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
2135262747 | I_kwDOAMm_X85_RYYb | 8749 | Lack of resilience towards missing `_ARRAY_DIMENSIONS` xarray's special zarr attribute #280 | etienneschalk 45271239 | open | 0 | 2 | 2024-02-14T21:52:34Z | 2024-02-15T19:15:59Z | CONTRIBUTOR | What is your issue? Original issue: https://github.com/xarray-contrib/datatree/issues/280 Note: this issue description was generated from a notebook. You can use it to reproduce the bug locally. Lack of resilience towards missing
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8749/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
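A rough reproduction sketch (mine, not the notebook referenced in the issue), assuming a zarr-python v2-style API: write a plain zarr array that carries no `_ARRAY_DIMENSIONS` attribute, then try to open it with xarray, which currently fails with a `KeyError` rather than degrading gracefully.

```python
import numpy as np
import xarray as xr
import zarr

# A raw zarr store written without xarray, so no _ARRAY_DIMENSIONS metadata exists.
root = zarr.open_group("plain.zarr", mode="w")
root.create_dataset("a", data=np.arange(3))

# xarray needs that attribute to recover dimension names.
try:
    xr.open_zarr("plain.zarr")
except KeyError as err:
    print(err)  # complains about the missing _ARRAY_DIMENSIONS attribute
```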
2117299976 | I_kwDOAMm_X85-M28I | 8705 | More granularity in the CI, separating code and docs changes? | etienneschalk 45271239 | open | 0 | 7 | 2024-02-04T20:54:30Z | 2024-02-15T14:51:12Z | CONTRIBUTOR | What is your issue? Hi, TLDR: is there a way to only run the relevant CI checks (e.g. documentation) when a new commit is pushed to a PR's branch? The following issue is written from a naive user point of view; indeed, I do not know how the CI works on this project. I noticed that when updating an existing Pull Request, the whole test battery is re-executed. However, it is a common scenario that someone wants to update only the documentation, for instance. In that case, it might make sense to only re-trigger the documentation checks. A little bit like Another separation would be to have an "order" / "dependency system" in the pipeline. E.g., There is also a notion of frequency and execution time: pipeline stages that are empirically the most likely to fail and the shortest to run should be run first, to avoid having them fail due to flakiness or bad luck after all the other checks have passed. Such a stage exists: |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
2128415253 | I_kwDOAMm_X85-3QoV | 8732 | Failing doctest CI Job: `The current Dask DataFrame implementation is deprecated.` | etienneschalk 45271239 | closed | 0 | 1 | 2024-02-10T13:12:23Z | 2024-02-10T23:44:25Z | 2024-02-10T23:44:25Z | CONTRIBUTOR | What happened? The doctest CI job for my Pull Request failed. The failure seems at first glance to be unrelated to my code changes; it seems related to a Dask warning. Note: I created this issue for logging purposes; it might become relevant only once another unrelated PR is subject to the same bug. What did you expect to happen? I expected the
(the command is taken from the CI definition file: https://github.com/pydata/xarray/actions/runs/7854959732/workflow?pr=8698#L83) Minimal Complete Verifiable Example
MVCE confirmation
Relevant log output

```Python
=================================== FAILURES ===================================
_________ [doctest] xarray.core.dataarray.DataArray.to_dask_dataframe __________
7373         ...     dims=("time", "lat", "lon"),
7374         ...     coords={
7375         ...         "time": np.arange(4),
7376         ...         "lat": [-30, -20],
7377         ...         "lon": [120, 130],
7378         ...     },
7379         ...     name="eg_dataarray",
7380         ...     attrs={"units": "Celsius", "description": "Random temperature data"},
7381         ... )
7382     >>> da.to_dask_dataframe(["lat", "lon", "time"]).compute()
UNEXPECTED EXCEPTION: DeprecationWarning("The current Dask DataFrame implementation is deprecated. \nIn a future release, Dask DataFrame will use new implementation that\ncontains several improvements including a logical query planning.\nThe user-facing DataFrame API will remain unchanged.\n\nThe new implementation is already available and can be enabled by\ninstalling the dask-expr library:\n\n $ pip install dask-expr\n\nand turning the query planning option on:\n\n >>> import dask\n >>> dask.config.set({'dataframe.query-planning': True})\n >>> import dask.dataframe as dd\n\nAPI documentation for the new implementation is available at\nhttps://docs.dask.org/en/stable/dask-expr-api.html\n\nAny feedback can be reported on the Dask issue tracker\nhttps://github.com/dask/dask/issues \n")
Traceback (most recent call last):
  File "/home/runner/micromamba/envs/xarray-tests/lib/python3.11/doctest.py", line 1353, in __run
    exec(compile(example.source, filename, "single",
  File "<doctest xarray.core.dataarray.DataArray.to_dask_dataframe[1]>", line 1, in <module>
  File "/home/runner/work/xarray/xarray/xarray/core/dataarray.py", line 7408, in to_dask_dataframe
    return ds.to_dask_dataframe(dim_order, set_index)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/xarray/xarray/xarray/core/dataset.py", line 7369, in to_dask_dataframe
    import dask.dataframe as dd
  File "/home/runner/micromamba/envs/xarray-tests/lib/python3.11/site-packages/dask/dataframe/__init__.py", line 162, in <module>
    warnings.warn(
DeprecationWarning: The current Dask DataFrame implementation is deprecated.
In a future release, Dask DataFrame will use new implementation that
contains several improvements including a logical query planning.
The user-facing DataFrame API will remain unchanged.

The new implementation is already available and can be enabled by
installing the dask-expr library:

    $ pip install dask-expr

and turning the query planning option on:

    >>> import dask
    >>> dask.config.set({'dataframe.query-planning': True})
    >>> import dask.dataframe as dd

API documentation for the new implementation is available at
https://docs.dask.org/en/stable/dask-expr-api.html

Any feedback can be reported on the Dask issue tracker
https://github.com/dask/dask/issues

/home/runner/work/xarray/xarray/xarray/core/dataarray.py:7382: UnexpectedException
=========================== short test summary info ============================
FAILED xarray/core/dataarray.py::xarray.core.dataarray.DataArray.to_dask_dataframe
============= 1 failed, 301 passed, 2 skipped in 78.04s (0:01:18) ==============
Error: Process completed with exit code 1.
```

Anything else we need to know? No response. Environment: N/A |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
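A small sketch of the mechanism (not from the issue): the warning is emitted the first time `dask.dataframe` is imported, and the CI run escalates warnings to errors (the log shows the warning surfacing as an exception inside the doctest), so the import performed by `to_dask_dataframe` blows up. Escalating the warning locally reproduces the same failure mode on an affected Dask version.

```python
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    # On an affected dask version (and in a fresh interpreter, since the warning
    # is only emitted on first import), this import raises the DeprecationWarning.
    import dask.dataframe  # noqa: F401
```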
2123948734 | PR_kwDOAMm_X85mT5_9 | 8719 | Test formatting platform | etienneschalk 45271239 | closed | 0 | 2 | 2024-02-07T21:41:23Z | 2024-02-09T03:01:35Z | 2024-02-09T03:01:35Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/8719 | Follow-up to #8702 / https://github.com/pydata/xarray/pull/8702#issuecomment-1932851112 The goal is to remove the inelegant OS-dependent checks introduced during the testing of #8702. A simple way to do so is to use unsigned integers as dtypes for tests involving data array representations on multiple OSes. Indeed, this solves the issue of the default dtypes not being printed in the repr, with default dtypes varying according to the OS. The tests show that the concerned dtypes are ~~- [ ] Closes #xxxx~~
- [x] Tests added
~~- [ ] User visible changes (including notable bug fixes) are documented in |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
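The dtype trick is sketched below (my own illustration, not code from the PR): the default integer dtype depends on the platform, so reprs built from default-dtype arrays can differ between OSes, whereas an explicit unsigned dtype is spelled out identically everywhere.

```python
import numpy as np
import xarray as xr

# Platform-dependent: the default integer dtype is typically int64 on Linux/macOS
# but int32 on Windows (NumPy < 2.0), so this repr can differ between OSes.
print(xr.Dataset({"a": ("x", np.array([1, 2, 3]))}))

# Platform-independent: pinning an explicit unsigned dtype keeps the repr stable.
print(xr.Dataset({"a": ("x", np.array([1, 2, 3], dtype=np.uint64))}))
```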
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);