issues
3 rows where state = "open" and user = 3460034 sorted by updated_at descending
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at, closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
id: 1532662115
node_id: PR_kwDOAMm_X85HWWhx
number: 7437
title: DRAFT: Implement `open_datatree` in BackendEntrypoint for preliminary DataTree support
user: jthielen 3460034
state: open
locked: 0
comments: 1
created_at: 2023-01-13T17:17:41Z
updated_at: 2023-07-31T10:09:18Z
author_association: CONTRIBUTOR
draft: 1
pull_request: pydata/xarray/pulls/7437
repo: xarray 13221727
type: pull
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/7437/reactions", "total_count": 6, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 5 }
body:

As discussed among folks at today's Pangeo working meeting (cc @jhamman, @TomNicholas), we are looking to try adding support for

```python
import xarray as xr

dt = xr.open_datatree("path/to/gribfile.grib", engine="cfgrib")
```

given that

Working Design Doc: https://hackmd.io/Oqeab-54TqOOHd5FdCb5DQ?edit

xref https://github.com/ecmwf/cfgrib/issues/327, https://github.com/openradar/xradar/issues/7
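The PR above proposes adding an `open_datatree` hook alongside the existing entrypoint machinery, so that `xr.open_datatree(path, engine=...)` can look up a backend by engine name and delegate to it. A minimal sketch of that dispatch pattern in plain Python follows; the `Backend`, `GribBackend`, and `ENGINES` names and the group layout are illustrative stand-ins, not xarray's actual classes or the PR's implementation:

```python
# Hypothetical sketch: each engine registers a backend object, and
# open_datatree looks the backend up by name and delegates to it.

class Backend:
    def open_datatree(self, path):
        raise NotImplementedError

class GribBackend(Backend):
    def open_datatree(self, path):
        # A real backend would return a DataTree of Datasets, one per
        # group in the file's hierarchy; a plain dict stands in here.
        return {"/": f"root of {path}", "/surface": "surface fields"}

# Registry mapping engine names to backend instances.
ENGINES = {"cfgrib": GribBackend()}

def open_datatree(path, engine):
    return ENGINES[engine].open_datatree(path)

tree = open_datatree("path/to/gribfile.grib", engine="cfgrib")
```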
id: 673682661
node_id: MDU6SXNzdWU2NzM2ODI2NjE=
number: 4313
title: Using Dependabot to manage doc build and CI versions
user: jthielen 3460034
state: open
locked: 0
comments: 4
created_at: 2020-08-05T16:24:24Z
updated_at: 2022-04-09T02:59:21Z
author_association: CONTRIBUTOR
repo: xarray 13221727
type: issue
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/4313/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

As brought up on the bi-weekly community developers meeting, it sounds like Pandas v1.1.0 is breaking doc builds on RTD. One solution to the frequent breakages in doc builds and CI caused by upstream updates is to keep fixed version lists for all of these dependencies, and to update them incrementally as new versions come out.

@dopplershift has done a lot of great work in MetPy setting up such a workflow with Dependabot (https://github.com/Unidata/MetPy/pull/1410), among other CI updates, and this could be adapted for use here in xarray. We've generally been quite happy with our updated CI configuration with Dependabot over the past couple of weeks. The only major issue has been https://github.com/Unidata/MetPy/issues/1424 / https://github.com/dependabot/dependabot-core/issues/2198#issuecomment-649726022, which has required some contributors to delete and recreate their forks so that Dependabot stops auto-submitting PRs to the forked repos. Any thoughts you have here, @dopplershift, would be appreciated!

xref https://github.com/pydata/xarray/issues/4287, https://github.com/pydata/xarray/pull/4296
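The pinned-versions workflow described in the issue above is driven by a `.github/dependabot.yml` file. A minimal sketch follows; the ecosystems, directory, and schedule shown are illustrative, not MetPy's or xarray's actual configuration:

```yaml
# Illustrative Dependabot configuration: check pinned pip requirements
# and GitHub Actions versions weekly, opening a PR for each update.
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"        # where the pinned requirements files live
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```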
id: 596062033
node_id: MDU6SXNzdWU1OTYwNjIwMzM=
number: 3950
title: Consistent Handling of Type Casting Hierarchy
user: jthielen 3460034
state: open
locked: 0
comments: 0
created_at: 2020-04-07T18:20:49Z
updated_at: 2020-04-07T18:36:22Z
author_association: CONTRIBUTOR
repo: xarray 13221727
type: issue
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/3950/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:

As brought up in #3643, there appear to be some inconsistencies in how xarray handles other numeric/duck array types with regard to a well-defined type casting hierarchy across operations, for example in the following:

- Construction/Wrapping
- Binary Ops

(would be one less category to worry about if refactored to use

One concrete example of where this has been problematic is with xarray DataArrays and Pint Quantities (#3643). xarray's DataArray is above Pint's Quantity in the (generally agreed upon) type casting hierarchy, and wrapping and binary ops work properly, since Pint Quantities defer and xarray DataArrays handle the operation. However, ufuncs fail because both attempt to defer to the other.

Having a consistent way of handling type compatibility across all relevant areas of xarray should remove these kinds of issues. However, it is worth keeping in mind that the broader ecosystem does not yet seem to have an agreed-upon way of doing this, so this would still be treading in uncertain waters for the moment. I've been operating under these assumptions when working with Pint, but I definitely think there is a need for more authoritative guidance.

Also, if I'm mistaken in any of the things mentioned above, please do let me know!

cc @keewis, @shoyer
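The deference protocol described in the issue above can be sketched in plain Python: the type lower in the casting hierarchy returns `NotImplemented` from its binary ops when it sees the higher type, so Python falls back to the higher type's reflected method. The `Quantity` and `DataArray` classes below are minimal stand-ins for illustration, not the real Pint or xarray implementations:

```python
class Quantity:
    # Lower in the casting hierarchy: defers to DataArray.
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if isinstance(other, DataArray):
            return NotImplemented  # let DataArray.__radd__ handle it
        return Quantity(self.value + other.value)

class DataArray:
    # Higher in the hierarchy: handles ops with Quantity by wrapping it.
    def __init__(self, data):
        self.data = data

    def __add__(self, other):
        if isinstance(other, Quantity):
            other = DataArray(other.value)
        return DataArray(self.data + other.data)

    # Quantity + DataArray lands here after Quantity defers.
    __radd__ = __add__

result = Quantity(2) + DataArray(3)
```

The ufunc failure mode described above is the case where *both* classes return `NotImplemented` from their `__array_ufunc__`, so neither side ever handles the operation.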
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
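Against this schema, the query behind the page (open issues for user 3460034, newest update first) can be reproduced with Python's built-in sqlite3 module. The sketch below uses a trimmed version of the table and rows stand-in for the three records shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trimmed-down version of the schema: only the columns the query uses.
conn.execute(
    "CREATE TABLE issues (id INTEGER PRIMARY KEY, number INTEGER,"
    " user INTEGER, state TEXT, updated_at TEXT)"
)
rows = [
    (1532662115, 7437, 3460034, "open", "2023-07-31T10:09:18Z"),
    (673682661, 4313, 3460034, "open", "2022-04-09T02:59:21Z"),
    (596062033, 3950, 3460034, "open", "2020-04-07T18:36:22Z"),
]
conn.executemany("INSERT INTO issues VALUES (?, ?, ?, ?, ?)", rows)

# The filter and sort stated at the top of the page; ISO-8601
# timestamps sort correctly as plain strings.
numbers = [
    n
    for (n,) in conn.execute(
        "SELECT number FROM issues"
        " WHERE state = 'open' AND user = ?"
        " ORDER BY updated_at DESC",
        (3460034,),
    )
]
```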