issues
10 rows where repo = 13221727 and user = 167802, sorted by updated_at descending
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at (sort column), closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
#7948 · Implement preferred_chunks for netcdf 4 backends (pull request, closed)
id 1778486450 · node_id PR_kwDOAMm_X85UHfL4 · user mraspaud (167802) · CONTRIBUTOR · comments 10
created 2023-06-28T08:43:30Z · updated 2023-09-12T09:01:03Z · closed 2023-09-11T23:05:49Z
draft 0 · pull_request pydata/xarray/pulls/7948 · reactions 0 · repo xarray (13221727)

Body (truncated in this export): According to the […] This PR adds this attribute to the encoding upon data reading. This results in […]
#1743 · Assigning data to vector-indexed data doesn't seem to work (issue, closed, completed)
id 277441150 · node_id MDU6SXNzdWUyNzc0NDExNTA= · user mraspaud (167802) · CONTRIBUTOR · comments 4
created 2017-11-28T16:06:56Z · updated 2022-02-23T12:23:42Z · closed 2017-12-09T03:29:35Z
reactions 0 · repo xarray (13221727)

Body (truncated in this export):

Code Sample

```python
import xarray as xr
import numpy as np
import dask.array as da

arr = np.arange(25).reshape((5, 5))
l_indices = xr.DataArray(np.array(((0, 1), (2, 3))), dims=['lines', 'cols'])
c_indices = xr.DataArray(np.array(((1, 3), (0, 2))), dims=['lines', 'cols'])
xarr = xr.DataArray(da.from_array(arr, chunks=10), dims=['y', 'x'])
print(xarr[l_indices, c_indices])
xarr[l_indices, c_indices] = 2
```

Problem description: this crashes on the last line with a […]

Expected Output: the modified array, with 2's in the indicated positions.

Output of […]
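The assignment in the report mirrors NumPy's fancy-indexing semantics, which do work on a plain in-memory array; the reported failure is specific to the dask-backed DataArray. A minimal NumPy-only sketch of the intended result, using the same index arrays as the report:

```python
import numpy as np

arr = np.arange(25).reshape((5, 5))
l_indices = np.array(((0, 1), (2, 3)))  # row index of each selected element
c_indices = np.array(((1, 3), (0, 2)))  # column index of each selected element

# Fancy-indexing assignment writes 2 at (0, 1), (1, 3), (2, 0) and (3, 2)
arr[l_indices, c_indices] = 2

print(arr[0, 1], arr[1, 3], arr[2, 0], arr[3, 2])  # → 2 2 2 2
```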
#3433 · Attributes are dropped after `clip` even if `keep_attrs` is True (issue, closed, completed)
id 510892578 · node_id MDU6SXNzdWU1MTA4OTI1Nzg= · user mraspaud (167802) · CONTRIBUTOR · comments 5
created 2019-10-22T20:32:44Z · updated 2020-10-14T16:29:52Z · closed 2020-10-14T16:29:52Z
reactions 0 · repo xarray (13221727)

Body (truncated in this export):

MCVE Code Sample

```python
import xarray as xr
import numpy as np

arr = xr.DataArray(np.ones((5, 5)), attrs={'units': 'K'})
xr.set_options(keep_attrs=True)

arr
# <xarray.DataArray (dim_0: 5, dim_1: 5)>
# array([[1., 1., 1., 1., 1.],
#        [1., 1., 1., 1., 1.],
#        [1., 1., 1., 1., 1.],
#        [1., 1., 1., 1., 1.],
#        [1., 1., 1., 1., 1.]])
# Dimensions without coordinates: dim_0, dim_1
# Attributes:
#     units: K

arr.clip(0, 1)
# <xarray.DataArray (dim_0: 5, dim_1: 5)>
# array([[1., 1., 1., 1., 1.],
#        [1., 1., 1., 1., 1.],
#        [1., 1., 1., 1., 1.],
#        [1., 1., 1., 1., 1.],
#        [1., 1., 1., 1., 1.]])
# Dimensions without coordinates: dim_0, dim_1
```

Expected Output: I would expect the attributes to be kept.

Problem Description […]

Output of […]
#3746 · dataarray arithmetics restore removed coordinates in xarray 0.15 (issue, closed, completed)
id 559645981 · node_id MDU6SXNzdWU1NTk2NDU5ODE= · user mraspaud (167802) · CONTRIBUTOR · comments 5
created 2020-02-04T11:06:40Z · updated 2020-03-21T19:03:51Z · closed 2020-03-21T19:03:51Z
reactions 0 · repo xarray (13221727)

Body (truncated in this export):

MCVE Code Sample

```python
import xarray as xr
import numpy as np

arr2 = xr.DataArray(np.ones((2, 2)), dims=['y', 'x'])
arr1 = xr.DataArray(np.ones((2, 2)), dims=['y', 'x'],
                    coords={'y': [0, 1], 'x': [0, 1]})
del arr1.coords['y']
del arr1.coords['x']

# shows arr1 without coordinates
arr1

# shows coordinates in xarray 0.15
arr1 * arr2
```

Expected Output […]

Problem Description: in xarray 0.15, the coordinates are restored when doing the multiplication: […]

Output of […]
#3317 · Can't create weakrefs on DataArrays since xarray 0.13.0 (issue, closed, completed)
id 495198361 · node_id MDU6SXNzdWU0OTUxOTgzNjE= · user mraspaud (167802) · CONTRIBUTOR · comments 8
assignee crusaderky (6213168)
created 2019-09-18T12:36:46Z · updated 2019-10-14T21:38:09Z · closed 2019-09-18T15:53:51Z
reactions 0 · repo xarray (13221727)

Body (truncated in this export):

MCVE Code Sample […]

Expected Output: I expect the weak reference to be created as in former versions.

Problem Description: the above code raises the following exception: […]

Output of […]
#2591 · Fix h5netcdf saving scalars with filters or chunks (pull request, closed)
id 387732534 · node_id MDExOlB1bGxSZXF1ZXN0MjM2MTU0NTUz · user mraspaud (167802) · CONTRIBUTOR · comments 8
created 2018-12-05T12:22:40Z · updated 2018-12-11T07:27:27Z · closed 2018-12-11T07:24:36Z
draft 0 · pull_request pydata/xarray/pulls/2591 · reactions 0 · repo xarray (13221727)

Body: (empty)
383667887 | MDU6SXNzdWUzODM2Njc4ODc= | 2563 | Scalars from netcdf dataset can't be written with h5netcdf | mraspaud 167802 | closed | 0 | 1 | 2018-11-22T22:44:48Z | 2018-12-11T07:24:36Z | 2018-12-11T07:24:36Z | CONTRIBUTOR | Code Sample, a copy-pastable example if possibleA "Minimal, Complete and Verifiable Example" will make it much easier for maintainers to help you: http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports ```python import xarray as xr from netCDF4 import Dataset def write_netcdf(filename,zlib,least_significant_digit,data,dtype='f4',shuffle=False,contiguous=False,\ chunksizes=None,complevel=6,fletcher32=False): file = Dataset(filename,'w') file.createDimension('n', 1) foo = file.createVariable('data',\ dtype,('n'),zlib=zlib,least_significant_digit=least_significant_digit,\ shuffle=shuffle,contiguous=contiguous,complevel=complevel,fletcher32=fletcher32,chunksizes=chunksizes) foo[:] = data file.close() write_netcdf("mydatafile.nc",True,None,0.0,shuffle=True, chunksizes=(1,)) data = xr.open_dataset('mydatafile.nc') arr = data['data'] arr[0].to_netcdf('mytestfile.nc', mode='w', engine='h5netcdf') ``` Problem descriptionThe above example crashes with a TypeError since xarray 0.10.4 (works before, hence reporting the error here and not in eg. h5netcdf):
The problem here is that it is not anymore possible to squeeze an array that comes from a netcdf file that was compressed or filtered. Expected OutputThe expected output is that the creation of the trimmed netcdf file works. Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2563/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
#1560 · DataArray.unstack taking unreasonable amounts of memory (issue, closed, completed)
id 255989233 · node_id MDU6SXNzdWUyNTU5ODkyMzM= · user mraspaud (167802) · CONTRIBUTOR · comments 11
created 2017-09-07T16:01:50Z · updated 2018-08-15T00:18:28Z · closed 2018-08-15T00:18:28Z
reactions 1 (+1: 1) · repo xarray (13221727)

Body:

Hi,

While trying to support DataArrays in pyresample, I stumbled upon what seems to me to be a bug: unstacking a dimension takes unreasonable amounts of memory. For example:

```python
from xarray import DataArray
import numpy as np

arr = DataArray(np.empty([1, 8996, 9223])).stack(flat_dim=['dim_1', 'dim_2'])
print(arr)
arr.unstack('flat_dim')
```

peaks at about 8 GB of my memory (in top), while the array itself shouldn't take more than about 635 MB. I know my measuring method is not very accurate, but should it be this way? As a side note, the unstacking also takes a very long time. What is going on under the hood?

Martin
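For scale: the array in the report holds 1 × 8996 × 9223 float64 values, which is indeed only about 633 MiB of raw data, so the multi-gigabyte peak presumably comes from the unstacking machinery (the MultiIndex and intermediate reindexing), not from the data itself. A quick check of that figure:

```python
# Size of the data payload alone, ignoring index structures
n_values = 1 * 8996 * 9223   # elements in the stacked array
data_bytes = n_values * 8    # float64 = 8 bytes per element
print(data_bytes / 2**20)    # ≈ 633 MiB, matching the ~635 MB estimate
```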
#1906 · Coordinate attributes as DataArray type doesn't export to netcdf (issue, closed, completed)
id 296673404 · node_id MDU6SXNzdWUyOTY2NzM0MDQ= · user mraspaud (167802) · CONTRIBUTOR · comments 5
created 2018-02-13T09:48:53Z · updated 2018-02-26T09:34:24Z · closed 2018-02-26T09:34:24Z
reactions 0 · repo xarray (13221727)

Body (truncated in this export):

Code Sample, a copy-pastable example if possible

```python
import numpy as np
import xarray as xr

arr = xr.DataArray([[1, 2, 3]], dims=['time', 'x'])
arr['time'] = np.array([1])
time_bnds = xr.DataArray([0, 1], dims='time_bounds')
arr['time'].attrs['bounds'] = time_bnds
dataset = xr.Dataset({'arr': arr, 'time_bnds': time_bnds})
dataset.to_netcdf('time_bnd.nc')
```

Problem description: this code produces a TypeError […]

Expected output: I would expect an output like this (ncdump -h): […]

Output of […]
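The TypeError is consistent with the netCDF data model: attribute values may only be strings or numeric scalars/arrays, and under CF conventions `bounds` should hold the *name* of the bounds variable rather than the variable itself. A rough, hypothetical validity check (not xarray's actual implementation) sketches the distinction:

```python
import numpy as np

def is_valid_netcdf_attr(value):
    """Rough check: netCDF attributes hold strings or numeric scalars/arrays."""
    if isinstance(value, (str, bytes, int, float, np.number)):
        return True
    if isinstance(value, (list, tuple, np.ndarray)):
        return np.asarray(value).dtype.kind in "biufc"
    return False

print(is_valid_netcdf_attr("time_bnds"))  # True — CF style: reference by name
print(is_valid_netcdf_attr([0, 1]))       # True — numeric array is fine
print(is_valid_netcdf_attr(object()))     # False — arbitrary objects are not
```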
#1842 · DataArray read from netcdf with unexpected type (issue, closed, completed)
id 289972054 · node_id MDU6SXNzdWUyODk5NzIwNTQ= · user mraspaud (167802) · CONTRIBUTOR · comments 1
created 2018-01-19T13:15:11Z · updated 2018-01-23T20:15:29Z · closed 2018-01-23T20:15:29Z
reactions 0 · repo xarray (13221727)

Body (truncated in this export):

Code Sample, a copy-pastable example if possible

```python
import numpy as np
import h5netcdf

filename = "mask_and_scale_float32.nc"
with h5netcdf.File(filename, 'w') as f:
    f.dimensions = {'x': 5}
    v = f.create_variable('hello', ('x',), dtype=np.uint16)
    v[:] = np.ones(5, dtype=np.uint16)
    v[0] = np.uint16(65535)
    v.attrs['_FillValue'] = np.uint16(65535)
    v.attrs['scale_factor'] = np.float32(2)
    v.attrs['add_offset'] = np.float32(0.5)

import xarray as xr
v = xr.open_dataset(filename, mask_and_scale=True)['hello']
print(v.dtype)
```

Problem description: the […]

Expected Output: float32

Output of […]
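The dtype question comes down to CF mask-and-scale decoding: unpacked = packed × scale_factor + add_offset, with `_FillValue` entries masked out. Since scale_factor and add_offset are float32 here, one would expect the decoded values to stay float32, which is the issue's expected output. A NumPy-only sketch of that decoding (an illustration of the CF rule, not xarray's actual code path):

```python
import numpy as np

packed = np.array([65535, 1, 1, 1, 1], dtype=np.uint16)
fill_value = np.uint16(65535)
scale_factor = np.float32(2)
add_offset = np.float32(0.5)

# CF decoding: cast to the attributes' dtype, then scale, offset, and mask
unpacked = packed.astype(scale_factor.dtype) * scale_factor + add_offset
unpacked[packed == fill_value] = np.nan

print(unpacked.dtype)  # float32
print(unpacked[1])     # 2.5
```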
```sql
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```