issues
6 rows where state = "open" and user = 1828519 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2244518111 | PR_kwDOAMm_X85suNEO | 8946 | Fix upcasting with python builtin numbers and numpy 2 | djhoese 1828519 | open | 0 | 18 | 2024-04-15T20:07:42Z | 2024-04-29T12:38:55Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/8946
See #8402 for more discussion. The bottom line is that numpy 2 changes the rules for casting between two inputs. Because of this and xarray's preference for promoting python scalars to 0-d arrays (scalar arrays), xarray objects are being upcast to higher data types where they previously weren't. I'm mainly opening this PR for further and more detailed discussion. CC @dcherian
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
xarray 13221727 | pull
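The casting change this PR works around can be reproduced with plain numpy. Below is a minimal sketch, assuming numpy >= 2.0 (NEP 50 promotion rules): a Python int is promoted "weakly" and keeps the array's dtype, while the same value wrapped in a 0-d array, which is how xarray promotes python scalars, is treated as a full int64 operand and forces an upcast. Under numpy 1.x both expressions stayed uint16 because of value-based casting.

```python
import numpy as np

arr = np.zeros((2, 2), dtype=np.uint16)

# Python int: weak promotion under numpy 2, so the array dtype wins.
print((arr + 5).dtype)               # uint16

# The same value wrapped in a 0-d array (how xarray wraps scalars): a regular
# int64 operand, so the result is upcast.
print((arr + np.asarray(5)).dtype)   # int64
```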
1974350560 | I_kwDOAMm_X851rjLg | 8402 | `where` dtype upcast with numpy 2 | djhoese 1828519 | open | 0 | 10 | 2023-11-02T14:12:49Z | 2024-04-15T19:18:49Z | CONTRIBUTOR
**What happened?** I'm testing my code with numpy 2.0 and current […] Doing […] The main problem seems to come down to: […] As this converts my scalar input […]

```python
import numpy as np

a = np.zeros((2, 2), dtype=np.uint16)
```

[…] what I'm intending to do with my xarray […]
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
xarray 13221727 | issue
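The truncated report above is about the same promotion change surfacing through `where`. The following is a hedged sketch of the kind of check involved, assuming numpy 2 and an xarray version without the fix from #8946; the exact result dtype depends on the versions in use.

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.zeros((2, 2), dtype=np.uint16), dims=("y", "x"))

# Keep values where the condition is True, otherwise use the Python int 2.
res = a.where(a == 0, 2)

# With numpy 1.x (or with the fix applied) this stays uint16; with numpy 2 and
# the scalar promoted to a 0-d array first, it may come back as int64.
print(res.dtype)
```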
1750685808 | PR_kwDOAMm_X85SqoXL | 7905 | Add '.hdf' extension to 'netcdf4' backend | djhoese 1828519 | open | 0 | 10 | 2023-06-10T00:45:15Z | 2023-06-14T15:25:08Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/7905
I'm helping @joleenf debug an issue where some old code that uses […] However, with […] What do people think? I didn't want to put any more work into this until others weighed in.
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
xarray 13221727 | pull
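For context, a hedged sketch of the behavior this PR touches. Without an explicit engine, xarray asks each backend whether it recognizes a path (for the netcdf4 backend this guess is extension-based), so a netCDF4/HDF5 file named with a `.hdf` suffix may not be picked up automatically; naming the engine explicitly always works. The filename `example.hdf` is made up for illustration, and the exact extension lists vary between xarray versions.

```python
import xarray as xr

# Explicitly choosing the engine works regardless of the file extension:
ds = xr.open_dataset("example.hdf", engine="netcdf4")

# Autodetection relies on the extension being registered with a backend, which
# is what adding '.hdf' to the netcdf4 backend would enable:
ds = xr.open_dataset("example.hdf")
```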
573031381 | MDU6SXNzdWU1NzMwMzEzODE= | 3813 | Xarray operations produce read-only array | djhoese 1828519 | open | 0 | 7 | 2020-02-28T22:07:59Z | 2023-03-22T15:11:14Z | CONTRIBUTOR
I've turned on testing my Satpy package with unstable or pre-releases of some of our dependencies, including numpy and xarray. I've found one error so far where in previous versions of xarray it was possible to assign to the numpy array taken from a DataArray.

**MCVE Code Sample**

```python
import numpy as np
import dask.array as da
import xarray as xr

data = np.arange(15, 301, 15).reshape(2, 10)
data_arr = xr.DataArray(data, dims=('y', 'x'), attrs={'test': 'test'})
data_arr = data_arr.copy()
data_arr = data_arr.expand_dims('bands')
data_arr['bands'] = ['L']
n_arr = np.asarray(data_arr.data)
n_arr[n_arr == 45] = 5
```

Which results in:

```
ValueError                                Traceback (most recent call last)
<ipython-input-12-90dae37dd808> in <module>
----> 1 n_arr = np.asarray(data_arr.data); n_arr[n_arr == 45] = 5

ValueError: assignment destination is read-only
```

**Expected Output**

A writable array. No error.

**Problem Description**

If this is expected new behavior then so be it, but wanted to check with the xarray devs before I tried to work around it.

**Output of** […]
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
xarray 13221727 | issue
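This is not the fix discussed in the issue, just a hedged illustration of the usual user-side workaround when an operation hands back a read-only view: take an explicit copy before mutating.

```python
import numpy as np
import xarray as xr

data_arr = xr.DataArray(np.arange(15, 301, 15).reshape(2, 10), dims=('y', 'x'))
data_arr = data_arr.expand_dims('bands')  # broadcast view; the backing array may be read-only

writable = np.array(data_arr.data)        # np.array copies by default, so this array is writable
writable[writable == 45] = 5
```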
341331807 | MDU6SXNzdWUzNDEzMzE4MDc= | 2288 | Add CRS/projection information to xarray objects | djhoese 1828519 | open | 0 | 45 | 2018-07-15T16:02:55Z | 2022-10-14T20:27:26Z | CONTRIBUTOR
**Problem description**

This issue is to start the discussion for a feature that would be helpful to a lot of people. It may not necessarily be best to put it in xarray, but let's figure that out. I'll try to describe things below to the best of my knowledge. I'm typically thinking of raster/image data when it comes to this stuff, but it could probably be used for GIS-like point data.

Geographic data can be projected (uniform grid) or unprojected (nonuniform). Unprojected data typically has longitude and latitude values specified per-pixel. I don't think I've ever seen non-uniform data in a projected space. Projected data can be specified by a CRS (PROJ.4), a number of pixels (shape), and extents/bbox in CRS units (xmin, ymin, xmax, ymax). This could also be specified in different ways, like origin (X, Y) and pixel size. Seeing as xarray already computes all […] So the question is: should these properties be standardized in xarray Dataset/DataArray objects, and how?

**Related libraries and developers**

[…] I know @WeatherGod also showed interest on gitter.

**Complications and things to consider**

[…]
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2288/reactions", "total_count": 14, "+1": 14, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
xarray 13221727 | issue
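A hedged sketch of the relationship described in the issue above: given a shape and extents in CRS units, the per-pixel x/y coordinate arrays are fully determined. The helper name `make_xy_coords` and the pixel-center convention are assumptions for illustration, not an xarray or pyresample API.

```python
import numpy as np

def make_xy_coords(shape, extents):
    """Pixel-center x/y coordinates from (rows, cols) and (xmin, ymin, xmax, ymax)."""
    rows, cols = shape
    xmin, ymin, xmax, ymax = extents
    x_res = (xmax - xmin) / cols
    y_res = (ymax - ymin) / rows
    x = xmin + x_res * (np.arange(cols) + 0.5)
    y = ymax - y_res * (np.arange(rows) + 0.5)  # top row first, decreasing y
    return x, y

x, y = make_xy_coords((480, 640), (-2.0e6, -1.5e6, 2.0e6, 1.5e6))
```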
449840662 | MDU6SXNzdWU0NDk4NDA2NjI= | 2996 | Checking non-dimensional coordinates for equality | djhoese 1828519 | open | 0 | 3 | 2019-05-29T14:24:41Z | 2021-03-02T05:08:32Z | CONTRIBUTOR
**Code Sample, a copy-pastable example if possible**

I'm working on a proof-of-concept for the […] I'm having trouble deciding what the best place is for this CRS information so that it benefits the user; […]

```python
from pyproj import CRS
import xarray as xr
import dask.array as da

crs1 = CRS.from_string('+proj=lcc +datum=WGS84 +lon_0=-95 +lat_0=25 +lat_1=25')
crs2 = CRS.from_string('+proj=lcc +datum=WGS84 +lon_0=-95 +lat_0=35 +lat_1=35')
a = xr.DataArray(da.zeros((5, 5), chunks=2), dims=('y', 'x'),
                 coords={'y': da.arange(1, 6, chunks=3), 'x': da.arange(2, 7, chunks=3),
                         'crs': crs1, 'test': 1, 'test2': 2})
b = xr.DataArray(da.zeros((5, 5), chunks=2), dims=('y', 'x'),
                 coords={'y': da.arange(1, 6, chunks=3), 'x': da.arange(2, 7, chunks=3),
                         'crs': crs2, 'test': 2, 'test2': 2})
a + b
```

Results in:

```
<xarray.DataArray 'zeros-e5723e7f9121b7ac546f61c19dabe786' (y: 5, x: 5)>
dask.array<shape=(5, 5), dtype=float64, chunksize=(2, 2)>
Coordinates:
  * y        (y) int64 1 2 3 4 5
  * x        (x) int64 2 3 4 5 6
    test2    int64 2
```

In the above code I was hoping that because the […] Any ideas for how I might be able to accomplish something like this? I'm not an expert on xarray/pandas indexes, but could this be another possible solution? […]

Edit: […]
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
xarray 13221727 | issue
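Continuing from the code sample in the issue above, a hedged sketch of one way to get a "complain about mismatched non-dimensional coordinates" behavior today: compare the `crs` coordinate explicitly before combining (pyproj `CRS` objects support `==`). This is a user-side workaround, not an xarray feature.

```python
# Assumes `a` and `b` from the code sample above.
if a.coords['crs'].item() != b.coords['crs'].item():
    raise ValueError("DataArrays are on different CRSs; resample before combining")
result = a + b
```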
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
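The schema above can be queried directly with sqlite3. As a hedged sketch, this reproduces the filter shown at the top of the page; the database filename `github.db` is an assumption.

```python
import sqlite3

con = sqlite3.connect("github.db")
rows = con.execute(
    "SELECT id, number, title, updated_at FROM issues "
    "WHERE state = 'open' AND [user] = 1828519 "
    "ORDER BY updated_at DESC"
).fetchall()
for row in rows:
    print(row)
```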