issues
4 rows where type = "issue" and user = 31640292 sorted by updated_at descending
Issue 5164: Xarray unable to read file that netCDF4 can
id: 859065068 | node_id: MDU6SXNzdWU4NTkwNjUwNjg= | user: WardBrian (31640292) | state: open | locked: 0 | comments: 5 | created_at: 2021-04-15T16:45:24Z | updated_at: 2023-09-16T15:59:34Z | author_association: CONTRIBUTOR | repo: xarray (13221727) | type: issue

What happened:
I am reading files from https://www-air.larc.nasa.gov/pub/NDACC/PUBLIC/stations/mauna.loa.hi/hdf/lidar/. When one of them is passed to `xr.open_dataset`, the following error is raised:

```python
RuntimeError                              Traceback (most recent call last)
<ipython-input-36-895975874f7f> in <module>
----> 1 xr.open_dataset(
      2     "/users/bmward/groundbased_lidar.temperature_nasa.jpl002_glass.1.1_mauna.loa.hi_20200103t050130z_20200103t072420z_001.h4",
      3     engine="netcdf4",
      4 )

~/.conda/envs/bg-dev/lib/python3.9/site-packages/xarray/backends/api.py in open_dataset(filename_or_obj, group, decode_cf, mask_and_scale, decode_times, concat_characters, decode_coords, engine, chunks, lock, cache, drop_variables, backend_kwargs, use_cftime, decode_timedelta)
    555
    556     with close_on_error(store):
--> 557         ds = maybe_decode_store(store, chunks)
    558
    559     # Ensure source filename always stored in dataset object (GH issue #2550)

~/.conda/envs/bg-dev/lib/python3.9/site-packages/xarray/backends/api.py in maybe_decode_store(store, chunks)
    451
    452     def maybe_decode_store(store, chunks):
--> 453         ds = conventions.decode_cf(
    454             store,
    455             mask_and_scale=mask_and_scale,

~/.conda/envs/bg-dev/lib/python3.9/site-packages/xarray/conventions.py in decode_cf(obj, concat_characters, mask_and_scale, decode_times, decode_coords, drop_variables, use_cftime, decode_timedelta)
    637         encoding = obj.encoding
    638     elif isinstance(obj, AbstractDataStore):
--> 639         vars, attrs = obj.load()
    640         extra_coords = set()
    641         close = obj.close

~/.conda/envs/bg-dev/lib/python3.9/site-packages/xarray/backends/common.py in load(self)
    111         """
    112         variables = FrozenDict(
--> 113             (_decode_variable_name(k), v) for k, v in self.get_variables().items()
    114         )
    115         attributes = FrozenDict(self.get_attrs())

~/.conda/envs/bg-dev/lib/python3.9/site-packages/xarray/backends/netCDF4_.py in get_variables(self)
    417
    418     def get_variables(self):
--> 419         dsvars = FrozenDict(
    420             (k, self.open_store_variable(k, v)) for k, v in self.ds.variables.items()
    421         )

~/.conda/envs/bg-dev/lib/python3.9/site-packages/xarray/core/utils.py in FrozenDict(*args, **kwargs)
    451
    452 def FrozenDict(*args, **kwargs) -> Frozen:
--> 453     return Frozen(dict(*args, **kwargs))
    454
    455

~/.conda/envs/bg-dev/lib/python3.9/site-packages/xarray/backends/netCDF4_.py in <genexpr>(.0)
    418     def get_variables(self):
    419         dsvars = FrozenDict(
--> 420             (k, self.open_store_variable(k, v)) for k, v in self.ds.variables.items()
    421         )
    422         return dsvars

~/.conda/envs/bg-dev/lib/python3.9/site-packages/xarray/backends/netCDF4_.py in open_store_variable(self, name, var)
    394         # netCDF4 specific encoding; save _FillValue for later
    395         encoding = {}
--> 396         filters = var.filters()
    397         if filters is not None:
    398             encoding.update(filters)

src/netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Variable.filters()

src/netCDF4/_netCDF4.pyx in netCDF4._netCDF4._ensure_nc_success()

RuntimeError: NetCDF: Attempting netcdf-4 operation on netcdf-3 file
```

However, the netCDF4 library itself can open the same file directly.

What you expected to happen:
I expect xarray to be able to load the file.

Minimal Complete Verifiable Example:

Anything else we need to know?:
Changing the engine to […] Setting […]

Environment:

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.1 | packaged by conda-forge | (default, Jan 26 2021, 01:34:10) [GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-1160.11.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.17.0
pandas: 1.2.3
numpy: 1.20.1
scipy: 1.6.2
netCDF4: 1.5.6
pydap: None
h5netcdf: 0.10.0
h5py: 3.1.0
Nio: None
zarr: None
cftime: 1.4.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2021.03.1
distributed: 2021.03.1
matplotlib: 3.3.4
cartopy: 0.18.0
seaborn: None
numbagg: None
pint: 0.17
setuptools: 49.6.0.post20210108
pip: 21.0.1
conda: None
pytest: None
IPython: 7.22.0
sphinx: 3.5.3
```

reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/5164/reactions)
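The inline reproduction for issue 5164 did not survive the export; a minimal sketch of the mismatch it describes, assuming one of the linked NDACC HDF4 files has been downloaded to the working directory, would look like this:

```python
import netCDF4
import xarray as xr

# One of the HDF4 lidar files from the NDACC archive linked above,
# assumed (hypothetically) to have been downloaded locally.
path = "groundbased_lidar.temperature_nasa.jpl002_glass.1.1_mauna.loa.hi_20200103t050130z_20200103t072420z_001.h4"

# Opening the file with the netCDF4 library directly works.
with netCDF4.Dataset(path) as nc:
    print(list(nc.variables))

# Opening the same file through xarray's netcdf4 backend raises
# "RuntimeError: NetCDF: Attempting netcdf-4 operation on netcdf-3 file",
# because open_store_variable() calls Variable.filters() on every variable
# while building the store, as the traceback above shows.
ds = xr.open_dataset(path, engine="netcdf4")
```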
Issue 4955: set_index does not respect keep_attrs
id: 816016031 | node_id: MDU6SXNzdWU4MTYwMTYwMzE= | user: WardBrian (31640292) | state: closed | locked: 0 | comments: 2 | created_at: 2021-02-25T02:22:05Z | updated_at: 2022-03-17T17:11:41Z | closed_at: 2022-03-17T17:11:41Z | author_association: CONTRIBUTOR | state_reason: completed | repo: xarray (13221727) | type: issue

What happened:
set_index removes attributes from a coordinate, even with keep_attrs enabled.

What you expected to happen:
The attributes should be preserved through the coordinate renaming.

Minimal Complete Verifiable Example:

Environment:

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.5 | packaged by conda-forge | (default, Sep 24 2020, 16:55:52) [GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-1160.11.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en.UTF-8
LOCALE: None.None
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.16.2
pandas: 1.2.0
numpy: 1.19.1
scipy: 1.6.0
netCDF4: 1.5.5.1
pydap: None
h5netcdf: 0.8.1
h5py: 3.1.0
Nio: None
zarr: None
cftime: 1.3.1
nc_time_axis: 1.2.0
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2021.01.0
distributed: 2021.01.0
matplotlib: 3.3.3
cartopy: 0.18.0
seaborn: 0.11.1
numbagg: None
pint: 0.16.1
setuptools: 49.6.0.post20210108
pip: 20.2.3
conda: None
pytest: None
IPython: 7.18.1
sphinx: 3.4.3
```

reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/4955/reactions)
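The minimal example for issue 4955 is likewise missing; a hypothetical reconstruction of the reported behaviour (coordinate names, values, and attributes invented for illustration) might be:

```python
import xarray as xr

da = xr.DataArray(
    [1.0, 2.0, 3.0],
    dims="x",
    coords={"level": ("x", [10, 20, 30], {"units": "hPa"})},
)

# Promote the "level" coordinate to be the index for dimension "x",
# asking for attributes to be kept.
with xr.set_options(keep_attrs=True):
    indexed = da.set_index(x="level")

# The report: the units attribute is gone from the renamed index coordinate.
print(indexed["x"].attrs)
```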
Issue 4575: FacetGrid.set_title should allow more extensive formatting
id: 741200389 | node_id: MDU6SXNzdWU3NDEyMDAzODk= | user: WardBrian (31640292) | state: open | locked: 0 | comments: 2 | created_at: 2020-11-12T01:41:32Z | updated_at: 2021-07-04T01:26:35Z | author_association: CONTRIBUTOR | repo: xarray (13221727) | type: issue

Is your feature request related to a problem? Please describe.
I've been using facet grids fairly extensively with data that is numpy.float32. Using set_titles does not produce properly formatted results: this dtype does not match isinstance(x, (float, np.float_)), so the value is cast directly to a string, leading to labels that are often unreadable.

Describe the solution you'd like
Do not use format_item in the facetgrid.set_titles function for non-datetime values; instead allow format arguments of the form "{value:0.4f}", etc.

Describe alternatives you've considered
The alternative is for format_item to be looser in its isinstance checks, in particular allowing other precisions of float to be formatted. Ideally, both of these could be implemented in tandem.

reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/4575/reactions)
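As a short illustration of the dtype check issue 4575 describes (a sketch under NumPy 1.x, where np.float_ is an alias of np.float64; this is not xarray's actual formatting code):

```python
import numpy as np

val = np.float32(1) / np.float32(3)

# np.float32 is not a subclass of Python float or of np.float_ (np.float64),
# so the formatter falls back to a plain str() cast for it.
print(isinstance(val, (float, np.float_)))   # False
print(str(val))                              # something like '0.33333334'
print("{value:0.4f}".format(value=val))      # '0.3333', the style of label requested
```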
Issue 5453: Notion of "distance" or "scale" for indexes and selection
id: 915168227 | node_id: MDU6SXNzdWU5MTUxNjgyMjc= | user: WardBrian (31640292) | state: open | locked: 0 | comments: 0 | created_at: 2021-06-08T15:19:52Z | updated_at: 2021-06-08T15:27:58Z | author_association: CONTRIBUTOR | repo: xarray (13221727) | type: issue

Is your feature request related to a problem? Please describe.
I've been using xarray with atmospheric data given on pressure levels. This data is best thought of in log(pressure) for computation, but it is stored and displayed as standard numbers. I would love if there were some way to have selection (e.g. method="nearest") measure distance in log(pressure) rather than in the raw values.

In general, one can imagine situations where the opposite is true (storing data in log-space for numerical accuracy, but wanting a concept of "nearest" in the standard linear sense), or a desire for arbitrary scaling.

Describe the solution you'd like
The simplest solution I can imagine is to provide a preprocessor argument to the sel function which operates over numpy values and is applied before the underlying index lookup. I believe this can be implemented by wrapping both index and label_value here with a call to the preprocess function (assuming the argument is only desired alongside the 'method' kwarg): https://github.com/pydata/xarray/blob/9daf9b13648c9a02bddee3640b80fe95ea1fff61/xarray/core/indexes.py#L224-L226

Describe alternatives you've considered
I'm not sure how this would relate to the ongoing work in #1603, but one solution is to include a concept of the underlying number line within the index API. The end result is similar to the proposed implementation, but it would be stored with the index rather than passed to the sel call. One version of this could also be used to set reasonable defaults when plotting, e.g. if a coordinate has a logarithmic number line then the x/y scale could default to 'log' when plotting over that coordinate.

reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/5453/reactions)
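The inline code examples for issue 5453 are missing from this export; the sketch below illustrates the requested behaviour. The preprocess keyword is shown purely as the proposed, currently non-existent API, and the pressure values are invented:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(4.0),
    dims="pressure",
    coords={"pressure": [1000.0, 500.0, 100.0, 10.0]},  # hPa, roughly log-spaced
)

# Proposed API (does not exist today): nearest-neighbour selection measured
# in log(pressure) space rather than on the raw values.
# da.sel(pressure=300.0, method="nearest", preprocess=np.log)

# Rough equivalent with the current API: transform by hand, find the nearest
# level in log space, then select that level exactly.
target = 300.0
log_dist = np.abs(np.log(da["pressure"].values) - np.log(target))
nearest_level = da["pressure"].values[log_dist.argmin()]
print(da.sel(pressure=nearest_level))
```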
```sql
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
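For reference, the filtered view described at the top of the page (type = "issue", user = 31640292, sorted by updated_at descending) corresponds to a query along these lines; the local database filename is hypothetical:

```python
import sqlite3

con = sqlite3.connect("github.db")  # hypothetical local copy of this database
rows = con.execute(
    """
    SELECT number, title, state, updated_at
    FROM issues
    WHERE type = 'issue' AND user = 31640292
    ORDER BY updated_at DESC
    """
).fetchall()
for number, title, state, updated_at in rows:
    print(number, state, updated_at, title)
con.close()
```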