issues
4 rows where repo = 13221727 and user = 4849151 sorted by updated_at descending
Columns: id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type
304314787 | MDU6SXNzdWUzMDQzMTQ3ODc= | 1982 | NetCDF coordinates in parent group is not used when reading sub group
user: jacklovell 4849151 | state: open | locked: 0 | comments: 10 | author_association: NONE
created_at: 2018-03-12T10:26:54Z | updated_at: 2021-12-27T18:19:22Z
repo: xarray 13221727 | type: issue | reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/1982/reactions)

**Code Sample, a copy-pastable example if possible**

```python
ncfile_cf = "x07z00017_cf.nc"
with xr.open_dataset(ncfile_cf, group="x07") as ds:
    ds_data_cf = ds.copy(deep=True)
print(ds_data_cf)
```

```
<xarray.Dataset>
Dimensions:  (time1: 100000, time2: 2)
Dimensions without coordinates: time1, time2
Data variables:
    aps      (time1) float64 ...
    iact     (time1) float64 ...
    vact     (time1) float64 ...
    dps      (time1) float64 ...
    tss      (time2) float64 ...
```

**Problem description**

When reading a sub-group from a netCDF file whose dimensions are defined in the root group, the dimensions are not read from the root group. This contradicts the netCDF documentation, which states that dimensions are scoped such that they can be seen by all sub-groups. The attached netCDF file demonstrates this issue: x07z00017_cf.nc.zip

**Expected Output**

The dimensions from the root group should be used when reading the sub-group:

```python
with xr.open_dataset(ncfile_cf) as ds:
    for coord in ds.coords:
        ds_data_cf.coords[coord] = ds[coord]
print(ds_data_cf)
```

```
<xarray.Dataset>
Dimensions:  (time1: 100000, time2: 2)
Coordinates:
  * time1    (time1) float64 0.0 1e-06 2e-06 3e-06 4e-06 5e-06 6e-06 7e-06 ...
  * time2    (time2) float64 0.0 0.1
Data variables:
    aps      (time1) float64 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 ...
    iact     (time1) float64 -0.00125 -0.000625 -0.00125 -0.0009375 ...
    vact     (time1) float64 -0.009375 -0.009375 -0.01875 -0.01875 -0.009375 ...
    dps      (time1) float64 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ...
    tss      (time2) float64 0.0 0.0
```

**Output of `xr.show_versions()`**
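The coordinate-copying workaround described in this issue can be sketched without the attached netCDF file, using in-memory datasets as hypothetical stand-ins for what `open_dataset` returns for the root group and for the `x07` sub-group:

```python
import numpy as np
import xarray as xr

# Hypothetical stand-ins: "root" holds the coordinate variables that live in
# the file's root group; "sub" is what open_dataset(..., group="x07")
# currently returns -- the dimensions exist but carry no coordinate values.
root = xr.Dataset(coords={"time1": np.arange(5) * 1e-6, "time2": [0.0, 0.1]})
sub = xr.Dataset({"aps": ("time1", np.zeros(5)), "tss": ("time2", np.zeros(2))})

# Copy the root-group coordinates onto the sub-group dataset, as in the
# issue's expected-output snippet.
for coord in root.coords:
    sub.coords[coord] = root[coord]

print(sub.coords)
```

This requires opening the file twice (once without `group=`), which is exactly the inconvenience the report is about.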
322849322 | MDU6SXNzdWUzMjI4NDkzMjI= | 2129 | Using `DataArray.where()` with a DataArray as the condition drops the name
user: jacklovell 4849151 | state: closed (completed) | locked: 0 | comments: 4 | author_association: NONE
created_at: 2018-05-14T14:46:18Z | updated_at: 2020-04-14T08:54:33Z | closed_at: 2020-04-14T08:54:33Z
repo: xarray 13221727 | type: issue | reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/2129/reactions)

**Code Sample, a copy-pastable example if possible**

Create a boolean DataArray to use as a mask for another DataArray, with the same coordinates:

**Problem description**

When using a DataArray as the condition in `where()`, the name of the resulting DataArray is dropped.

**Expected Output**

The name should be retained.

**Output of `xr.show_versions()`**
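A minimal sketch of the setup this issue describes (the array name and values here are hypothetical, since the original code sample did not survive the export). Re-applying the name after `where()` is a defensive workaround that is harmless on versions where the bug is already fixed:

```python
import numpy as np
import xarray as xr

# A named DataArray and a boolean DataArray mask with the same coordinates.
da = xr.DataArray(np.arange(4.0), dims="x", name="signal")
mask = da < 2.0

# Re-apply the original name after masking; the issue was that where()
# dropped it, and this keeps the result named either way.
masked = da.where(mask).rename(da.name)
print(masked.name)
```

Elements failing the condition become NaN, as usual for `where()` without an `other=` argument.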
303809308 | MDU6SXNzdWUzMDM4MDkzMDg= | 1977 | Netcdf char array not being decoded to string in compound dtype
user: jacklovell 4849151 | state: open | locked: 0 | comments: 3 | author_association: NONE
created_at: 2018-03-09T11:23:04Z | updated_at: 2020-02-14T13:26:11Z
repo: xarray 13221727 | type: issue | reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/1977/reactions)

**Code Sample, a copy-pastable example if possible**

**Problem description**

When opening the attached dataset, the char arrays in the compound dtype are not being converted into strings, despite

**Expected Output**

The char arrays should be converted into strings (or at the very least, bytes if an encoding is not present):

```python
import netCDF4 as nc

longest_str = np.max([ds.slits.values[field].shape[-1]
                      for field in ds.slits.values.dtype.fields
                      if ds.slits.values[field].dtype.kind in ('S', 'U')])
str_dtype = '<U{}'.format(longest_str)
cartesian_coord = np.dtype([('x', np.float64), ('y', np.float64), ('z', np.float64)])
aperture_dtype_str = np.dtype([('Object_type', str_dtype), ('ID', str_dtype),
                               ('Version', np.int32), ('basis_1', cartesian_coord),
                               ('basis_2', cartesian_coord),
                               ('centre_point', cartesian_coord),
                               ('width', np.float64), ('height', np.float64),
                               ('slit_id', str_dtype), ('slit_no', np.int32)])
ds['slits_str'] = xr.DataArray(np.empty(ds.slits.size, aperture_dtype_str),
                               coords=[('slit_no', ds.coords['slit_no'])])
for key in ds.slits.values.dtype.fields:
    if key in ('Object_type', 'ID', 'slit_id'):
        string_key = nc.chartostring(ds.slits.values[key])
        ds.slits_str.values[key] = string_key
    else:
        ds.slits_str.values[key] = ds.slits.values[key]
print(ds.slits_str)
```

```
<xarray.DataArray 'slits_str' (slit_no: 4)>
array([('BolometerSlit', 'MAST-U SXD - Outer Slit 1', 1, (-0.06458486, 0.21803484, -0.97380162), ( 0.95881973, 0.28401534, 0.), (-0.52069675, 1.77104629, -1.564), 0.005, 0.005, 'MAST-U SXD - Outer Slit 1', 0),
       ('BolometerSlit', 'MAST-U SXD - Outer Slit 2', 1, (-0.16038567, 0.54145294, -0.82529095), ( 0.95881973, 0.28401534, 0.), (-0.5278879 , 1.76891617, -1.564), 0.005, 0.005, 'MAST-U SXD - Outer Slit 2', 1),
       ('BolometerSlit', 'MAST-U SXD - Upper Slit 3', 1, (-0.26470454, 0.89362754, -0.36243804), ( 0.95881973, 0.28401534, 0.), (-0.31231469, 1.06756025, -1.57072314), 0.005, 0.005, 'MAST-U SXD - Upper Slit 3', 2),
       ('BolometerSlit', 'MAST-U SXD - Upper Slit 4', 1, (-0.19640032, 0.66303636, 0.72236396), ( 0.95881973, 0.28401534, 0.), (-0.31950584, 1.06543013, -1.57072314), 0.005, 0.005, 'MAST-U SXD - Upper Slit 4', 3)],
      dtype=[('Object_type', '<U30'), ('ID', '<U30'), ('Version', '<i4'), ('basis_1', [('x', '<f8'), ('y', '<f8'), ('z', '<f8')]), ('basis_2', [('x', '<f8'), ('y', '<f8'), ('z', '<f8')]), ('centre_point', [('x', '<f8'), ('y', '<f8'), ('z', '<f8')]), ('width', '<f8'), ('height', '<f8'), ('slit_id', '<U30'), ('slit_no', '<i4')])
Coordinates:
  * slit_no  (slit_no) int64 0 1 2 3
```

**Output of `xr.show_versions()`**
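The `netCDF4.chartostring` call in the workaround above joins the trailing character axis of a char array into fixed-width strings. The core of that conversion can be sketched with a plain numpy view on hypothetical data:

```python
import numpy as np

# Hypothetical char array as netCDF stores it: one character per cell,
# with the string length along the last axis.
chars = np.array([list("abc"), list("xyz")], dtype="S1")   # shape (2, 3)

# Viewing the contiguous 'S1' buffer as 'S3' joins each row into a single
# fixed-width bytestring, which is essentially what chartostring does
# (chartostring additionally decodes to str when an encoding is known).
strings = chars.view("S3").reshape(chars.shape[0])
print(strings)  # [b'abc' b'xyz']
```

The issue is that this conversion is not applied to char-array fields *inside* a compound dtype, only to standalone char variables.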
201617371 | MDU6SXNzdWUyMDE2MTczNzE= | 1217 | Using where() in datasets with dataarrays with different dimensions results in huge RAM consumption
user: jacklovell 4849151 | state: closed (completed) | locked: 0 | comments: 6 | author_association: NONE
created_at: 2017-01-18T16:09:50Z | updated_at: 2019-02-23T07:47:01Z | closed_at: 2019-02-23T07:47:01Z
repo: xarray 13221727 | type: issue | reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/1217/reactions)

I have a dataset containing groups of data with different dimensions, e.g.:

If I do something like `ds.where(ds.data1 < 0.1)`, Python ends up allocating huge amounts of memory (>30 GB for a dataset of <1 MB) and seems to loop indefinitely until the call is interrupted with Ctrl-C. To use `where()` successfully, I have to use a subset of the dataset with only one dimension.
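The blow-up comes from `Dataset.where()` broadcasting the condition against every variable in the dataset, so variables on unrelated dimensions get expanded to the product of both dimension sizes. A small sketch of the subsetting workaround the report describes (variable and dimension names are hypothetical):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({
    "data1": ("dim1", np.linspace(0.0, 1.0, 10)),
    "data2": ("dim2", np.zeros(20)),
})

# ds.where(ds.data1 < 0.1) would broadcast the dim1-shaped condition
# against data2, producing a (dim2, dim1) array per variable -- the source
# of the memory blow-up when both dimensions are large.
subset = ds[["data1"]].where(ds.data1 < 0.1)
print(subset.data1.dims)  # ('dim1',)
```

Selecting only the variables that share the condition's dimension before calling `where()` keeps every result the same shape as its input.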
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```