
issues


5 rows where user = 3460034 sorted by updated_at descending


Facets: type (issue 3, pull 2) · state (open 3, closed 2) · repo (xarray 5)

#7437 · DRAFT: Implement `open_datatree` in BackendEntrypoint for preliminary DataTree support
id: 1532662115 · node_id: PR_kwDOAMm_X85HWWhx · user: jthielen (3460034) · state: open · comments: 1 · created: 2023-01-13T17:17:41Z · updated: 2023-07-31T10:09:18Z · author_association: CONTRIBUTOR · draft: 1 · pull_request: pydata/xarray/pulls/7437

As discussed among folks at today's Pangeo working meeting (cc @jhamman, @TomNicholas), we are looking to add support for DataTree in the Backend API, so that backend engines can readily add DataTree capability. For example, with cfgrib, we could have:

```python
import xarray as xr

dt = xr.open_datatree("path/to/gribfile.grib", engine="cfgrib")
```

given that cfgrib implements the appropriate method on its BackendEntrypoint subclass. Similarly, NetCDF files or Zarr stores with groups could be opened as a DataTree, obviating the need to specify a single group.
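
For backend authors, the hook could look something like the minimal sketch below. This is only an illustration of the proposed design: the `open_datatree` method name, its signature, the dummy datasets, and the `DataTree.from_dict` assembly are assumptions based on this draft, not a finalized xarray API.

```python
import xarray as xr
from xarray.backends import BackendEntrypoint


class MyBackendEntrypoint(BackendEntrypoint):
    def open_dataset(self, filename_or_obj, *, drop_variables=None, **kwargs):
        # Existing required hook: return a single (root-group) Dataset.
        return xr.Dataset({"a": ("x", [1, 2, 3])})

    def open_datatree(self, filename_or_obj, **kwargs):
        # Proposed hook (name assumed from this draft): return the whole
        # group hierarchy in one call instead of one group at a time.
        from datatree import DataTree  # DataTree lived in a separate package at this point

        root = self.open_dataset(filename_or_obj)
        group1 = xr.Dataset({"b": ("y", [4.0, 5.0])})
        return DataTree.from_dict({"/": root, "/group1": group1})
```

`xr.open_datatree(path, engine=...)` would then dispatch to this method in the same way `xr.open_dataset` dispatches to `open_dataset`.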

Working Design Doc: https://hackmd.io/Oqeab-54TqOOHd5FdCb5DQ?edit

xref https://github.com/ecmwf/cfgrib/issues/327, https://github.com/openradar/xradar/issues/7

  • ~~Closes #xxxx~~
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst

reactions (https://api.github.com/repos/pydata/xarray/issues/7437/reactions): total 6 · +1: 1 · eyes: 5
repo: xarray (13221727) · type: pull

#4313 · Using Dependabot to manage doc build and CI versions
id: 673682661 · node_id: MDU6SXNzdWU2NzM2ODI2NjE= · user: jthielen (3460034) · state: open · comments: 4 · created: 2020-08-05T16:24:24Z · updated: 2022-04-09T02:59:21Z · author_association: CONTRIBUTOR

As brought up at the biweekly community developers meeting, it sounds like Pandas v1.1.0 is breaking doc builds on RTD. One solution to frequent doc-build and CI breakages due to upstream updates is to keep pinned version lists for all of these dependencies, incrementally updated as new versions come out. @dopplershift has done a lot of great work in MetPy setting up such a workflow with Dependabot (https://github.com/Unidata/MetPy/pull/1410), among other CI updates, and this could be adapted for use here in xarray.
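
For reference, the core of such a setup is a small Dependabot config pointing at the pinned requirement files. The sketch below is illustrative only; the directory path and schedule are assumptions, not xarray's (or MetPy's) actual configuration:

```yaml
# .github/dependabot.yml -- illustrative sketch, not an actual project config
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/ci/requirements"  # assumed location of the pinned requirement files
    schedule:
      interval: "weekly"
```

With something like this in place, Dependabot opens PRs bumping the pinned files on the chosen schedule.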

We've generally been quite happy with our updated CI configuration with Dependabot over the past couple of weeks. The only major issue has been https://github.com/Unidata/MetPy/issues/1424 / https://github.com/dependabot/dependabot-core/issues/2198#issuecomment-649726022, which has required some contributors to delete and recreate their forks so that Dependabot stops auto-submitting PRs to the forked repos.

Any thoughts you have here, @dopplershift, would be appreciated!

xref https://github.com/pydata/xarray/issues/4287, https://github.com/pydata/xarray/pull/4296

reactions (https://api.github.com/repos/pydata/xarray/issues/4313/reactions): total 1 · +1: 1
repo: xarray (13221727) · type: issue

#4208 · Support for duck Dask Arrays
id: 653430454 · node_id: MDU6SXNzdWU2NTM0MzA0NTQ= · user: jthielen (3460034) · state: closed · comments: 18 · created: 2020-07-08T16:23:12Z · updated: 2020-09-02T18:28:12Z · closed: 2020-09-02T18:28:12Z · author_association: CONTRIBUTOR

https://github.com/pydata/xarray/issues/525#issuecomment-531603357 raised the idea of adding "duck Dask Array" support to xarray as a way to handle xarray > Pint Quantity > Dask Array wrapping in a way that still allowed for most of xarray's Dask integration to work properly. With @rpmanser working on implementing the Dask collection interface in Pint (https://github.com/hgrecco/pint/pull/1129), I thought it best to elevate this to its own issue to track progress and discuss implementation on xarray's side (since hopefully @rpmanser or I can get started on it soon).

Two initial (and intertwined) discussion points that I'd like to bring up (xref https://github.com/hgrecco/pint/pull/1129#issuecomment-655197079):

  • How should xarray check for a duck Dask Array? (One possible check is sketched after this list.)
  • Is it acceptable for a Pint Quantity to always have the Dask collection interface defined (i.e., be a duck Dask array), even when its magnitude (what it wraps) is not a Dask Array?
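
On the first question, here is one possible check; this is only a sketch of the idea (xarray's eventual helper may differ):

```python
from dask.base import is_dask_collection


def is_duck_array(x) -> bool:
    # Quacks like a NumPy array: core attributes plus one of the array protocols.
    return (
        hasattr(x, "ndim")
        and hasattr(x, "shape")
        and hasattr(x, "dtype")
        and (hasattr(x, "__array_function__") or hasattr(x, "__array_ufunc__"))
    )


def is_duck_dask_array(x) -> bool:
    # A duck array that also implements the Dask collection interface,
    # e.g. a Pint Quantity wrapping a dask.array.Array.
    return is_duck_array(x) and is_dask_collection(x)
```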

cc @keewis, @shoyer, @crusaderky

reactions (https://api.github.com/repos/pydata/xarray/issues/4208/reactions): none
state_reason: completed · repo: xarray (13221727) · type: issue

#3950 · Consistent Handling of Type Casting Hierarchy
id: 596062033 · node_id: MDU6SXNzdWU1OTYwNjIwMzM= · user: jthielen (3460034) · state: open · comments: 0 · created: 2020-04-07T18:20:49Z · updated: 2020-04-07T18:36:22Z · author_association: CONTRIBUTOR

As brought up in #3643, there appear to be some inconsistencies in how xarray handles other numeric/duck array types with respect to a well-defined type casting hierarchy across operations. For example:

Construction/Wrapping

  • Allows:
      • xarray.core.indexing.ExplicitlyIndexed
      • pandas.Index
      • Dask array
      • __array_function__ implementers
  • Automatically converts:
      • anything with a values attribute to its values
      • datetime-like array types
      • masked arrays
      • anything else for which np.asarray(data) is valid
  • Doesn't reject any type when trying to wrap (though for an upcast type such as a HoloViews Dataset, rejection may be needed?)

Binary Ops

  • Defers based on xarray's internal hierarchy (Dataset, DataArray, Variable), otherwise relies upon methods of underlying data, and then wraps result.

(This would be one less category to worry about if binary ops were refactored to use __array_ufunc__; see https://github.com/pydata/xarray/pull/3936#issuecomment-610516784.)

__array_ufunc__

  • Allows a fixed list of supported types (https://github.com/pydata/xarray/blob/9b5140e0711247c373987b56726282140b406d7f/xarray/core/arithmetic.py#L24-L30), along with SupportsArithmetic
  • Defers to all other types

__array_function__

  • To be implemented (https://github.com/pydata/xarray/issues/3917)

One concrete example of where this has been problematic is with xarray DataArrays and Pint Quantities (#3643). xarray DataArray is above Pint Quantity in the (generally agreed upon) type casting hierarchy, and wrapping and binary ops work properly since Pint Quantities defer and xarray DataArrays handle the operation. However, ufuncs fail because they both attempt to defer to the other. Having a consistent way of handling type compatibility across all relevant areas in xarray should be able to remove these kinds of issues.
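
To make the failure mode concrete, here is a minimal sketch of the deferral pattern (not xarray's or Pint's actual code): a wrapper handles ufuncs for the types it knows and returns NotImplemented otherwise, handing control to the other operand.

```python
import numpy as np


class Wrapper:
    # Types this wrapper is willing to handle in ufuncs.
    _HANDLED_TYPES = (np.ndarray, int, float)

    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        if not all(isinstance(x, (Wrapper,) + self._HANDLED_TYPES) for x in inputs):
            return NotImplemented  # defer, hoping the other operand handles it
        unwrapped = [x.data if isinstance(x, Wrapper) else x for x in inputs]
        return Wrapper(getattr(ufunc, method)(*unwrapped, **kwargs))


print(np.add(Wrapper([1, 2]), 3).data)  # works: [4 5]
# If two wrapper classes each leave the other off their handled list, both
# return NotImplemented and NumPy raises TypeError -- the situation described above.
```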

However, it would be good to keep in mind that the broader ecosystem does not yet seem to have an agreed-upon way of doing this, so this would still be treading in uncertain waters for the moment. I've been operating under these assumptions when working with Pint, but I definitely think there is a need for more authoritative guidance.

Also, if I'm mistaken in any of the things mentioned above, please do let me know!

cc @keewis, @shoyer

reactions (https://api.github.com/repos/pydata/xarray/issues/3950/reactions): total 2 · +1: 2
repo: xarray (13221727) · type: issue

#3410 · Update Terminology page to account for multidimensional coordinates
id: 508171906 · node_id: MDExOlB1bGxSZXF1ZXN0MzI5MDE3OTU2 · user: jthielen (3460034) · state: closed · comments: 1 · created: 2019-10-17T00:52:12Z · updated: 2019-10-24T04:25:43Z · closed: 2019-10-24T04:25:43Z · author_association: CONTRIBUTOR · draft: 0 · pull_request: pydata/xarray/pulls/3410

As discussed in https://github.com/pydata/xarray/pull/3352, this PR modifies the Terminology page in the docs to briefly address multidimensional coordinates. Sorry for the delay in getting this in!

Also, when attempting to test the doc build, I found that the doc/environment.yml file was no longer present, so I updated the reference to point to ci/requirements/doc.yml.

  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

reactions (https://api.github.com/repos/pydata/xarray/issues/3410/reactions): none
repo: xarray (13221727) · type: pull

Table schema:

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
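
The page above corresponds to a query along these lines (a sketch; the exact SQL Datasette generates may differ):

select * from [issues]
where [user] = 3460034
order by [updated_at] desc;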