issues
7 rows where user = 6153603 sorted by updated_at descending
**374460958 · issue #2517 · Treat accessor dataarrays as members of parent dataset**
node_id MDU6SXNzdWUzNzQ0NjA5NTg= · user czr137 (6153603) · state closed · locked 0 · comments 5 · author_association CONTRIBUTOR · state_reason completed · repo xarray (13221727) · reactions 0
created_at 2018-10-26T16:37:45Z · updated_at 2018-11-05T22:40:46Z · closed_at 2018-11-05T22:40:46Z

Code Sample

```python
import xarray as xr
import pandas as pd

# What I'm doing with comparison, I'd like to do with actual
comparison = xr.Dataset({'data': (['time'], [100, 30, 10, 3, 1]),
                         'altitude': (['time'], [5, 10, 15, 20, 25])},
                        coords={'time': pd.date_range('2014-09-06', periods=5, freq='1s')})

# With altitude as a data var, I can do the following:
comparison.swap_dims({'time': 'altitude'}).interp(altitude=12.0).data

# And
for (time, g) in comparison.groupby('time'):
    print(time)
    print(g.altitude.values)

@xr.register_dataset_accessor('acc')
class Accessor(object):
    def __init__(self, xarray_ds):
        self._ds = xarray_ds
        self._altitude = None

actual = xr.Dataset({'data': (['time'], [100, 30, 10, 3, 1])},
                    coords={'time': pd.date_range('2014-09-06', periods=5, freq='1s')})

# This doesn't work:
actual.swap_dims({'time': 'altitude'}).interp(altitude=12.0).data

# Neither does this:
for (time, g) in actual.groupby('time'):
    print(time)
    print(g.acc.altitude.values)
```

Problem description

I've been using accessors to extend xarray with some custom computation. The altitude in the above dataset is not used every time the data is loaded, but when it is, it is an expensive computation to make (which is why I put it in as an accessor; if it isn't needed, it isn't computed). The problem is, once it has been computed, I'd like to be able to use it as if it were a regular data_var of the dataset: for example, to interp on the newly computed column, or use it in a groupby. Please advise if I'm going about this in the wrong way and how I should think about this problem instead.

Output of
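One workaround for the use case above, sketched under the assumption that it is acceptable for the accessor to write the computed result back into the parent dataset (the accessor name `acc`, the class name `CachingAccessor`, and the hard-coded altitude values are illustrative stand-ins for the expensive computation):

```python
import pandas as pd
import xarray as xr

@xr.register_dataset_accessor("acc")
class CachingAccessor:
    """Accessor that computes altitude lazily and caches it on the dataset."""

    def __init__(self, xarray_ds):
        self._ds = xarray_ds

    @property
    def altitude(self):
        # Compute once, then store in the parent dataset so that
        # swap_dims/groupby/interp see it as an ordinary variable.
        if "altitude" not in self._ds:
            # Stand-in for the expensive computation.
            self._ds["altitude"] = xr.DataArray([5, 10, 15, 20, 25], dims=["time"])
        return self._ds["altitude"]

actual = xr.Dataset(
    {"data": (["time"], [100, 30, 10, 3, 1])},
    coords={"time": pd.date_range("2014-09-06", periods=5, freq="1s")},
)
actual.acc.altitude  # triggers the computation and caches the result

# Now the swap_dims/interp pattern from the issue works on `actual` too.
value = float(
    actual.swap_dims({"time": "altitude"})
    .reset_coords("time", drop=True)
    .interp(altitude=12.0)["data"]
)
```

The `reset_coords("time", drop=True)` step discards the datetime coordinate before interpolating, so only the numeric variables are interpolated along the new `altitude` dimension.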
**374473176 · issue #2518 · Allow reduce to return an additional dimension**
node_id MDU6SXNzdWUzNzQ0NzMxNzY= · user czr137 (6153603) · state closed · locked 0 · comments 2 · author_association CONTRIBUTOR · state_reason completed · repo xarray (13221727) · reactions 0
created_at 2018-10-26T17:15:29Z · updated_at 2018-10-27T14:14:39Z · closed_at 2018-10-27T14:14:39Z

Code Sample, a copy-pastable example if possible

```python
import xarray as xr
from scipy.interpolate import interp1d

airtemps = xr.tutorial.load_dataset('air_temperature')

def interp_to_target(y, axis, **kwargs):
    x = kwargs['x']
    target = kwargs['target']
    return interp1d(x, y)(target)

# This works
airtemps.groupby('lat').reduce(interp_to_target, dim='lon', x=airtemps.lon, target=213.5)

# This doesn't, but I'd like it to:
airtemps.groupby('lat').reduce(interp_to_target, dim='lon', x=airtemps.lon, target=[213.5, 213.6])
```

Problem description

In the code above, I give an example of how I'd like to use a reduce to return an additional dimension that I will need to be defined. The scipy call to interp1d has no trouble calculating the data, but xarray issues an error. I used the above example as a generic case. I know I could use .interp to do an interpolation, but I have different reduction functions in mind that produce an additional dimension. The scipy interp1d function just serves as a working example.

Expected Output

Output of
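A pattern that achieves what the issue asks for is `xr.apply_ufunc` with `output_core_dims`, which lets the applied function return a new dimension. A minimal sketch using synthetic data in place of the tutorial `air_temperature` dataset (the function name `interp_to_target` and the `target` coordinate name are illustrative):

```python
import numpy as np
import xarray as xr
from scipy.interpolate import interp1d

# Synthetic stand-in for the tutorial data: air = lat + lon, so the
# interpolated values are easy to verify by hand.
lon = np.linspace(200.0, 230.0, 7)
lat = np.linspace(15.0, 75.0, 5)
air = xr.Dataset(
    {"air": (("lat", "lon"), np.add.outer(lat, lon))},
    coords={"lat": lat, "lon": lon},
)

targets = [213.5, 213.6]

def interp_to_target(y, x, target):
    # interp1d interpolates along the last axis; returning values at
    # len(target) points adds one trailing axis to the input.
    return interp1d(x, y)(target)

# input_core_dims moves "lon" to the last axis before calling the
# function; output_core_dims declares the new "target" dimension.
result = xr.apply_ufunc(
    interp_to_target,
    air["air"],
    kwargs={"x": air["lon"].values, "target": targets},
    input_core_dims=[["lon"]],
    output_core_dims=[["target"]],
).assign_coords(target=targets)
```

Unlike `reduce`, `apply_ufunc` does not require the function to collapse a dimension outright; it only requires the core dimensions of the output to be declared.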
**340757861 · issue #2284 · interp over time coordinate**
node_id MDU6SXNzdWUzNDA3NTc4NjE= · user czr137 (6153603) · state closed · locked 0 · comments 2 · author_association CONTRIBUTOR · state_reason completed · repo xarray (13221727) · reactions 3 (+1: 3)
created_at 2018-07-12T18:54:45Z · updated_at 2018-07-29T06:09:41Z · closed_at 2018-07-29T06:09:41Z

Before I start, I'm very excited about the interp addition in 0.10.7. Great addition, and thanks to @fujiisoup and @shoyer. I see there was a bit of a discussion in the interp pull request, #2104, about interpolating over times, and that it was suggested to wait for use cases. I can think of an immediate use case in my line of work. I frequently use regular gridded geophysical data (time, lat, lon), not unlike the sample tutorial air_temperature data, and the data must be interpolated to line up with corresponding satellite measurements that are irregularly spaced in lat, lon and time. Being able to interpolate in one quick step would be fantastic. For example:

Problem description

Currently, interpolating over the time coordinate issues an error.

Desired Output
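The pointwise ("track") interpolation described above can be expressed by passing DataArray indexers that share a common dimension, so `interp` interpolates point by point rather than over the outer product of the indexers. A minimal sketch with synthetic data (the `obs` dimension name and the sample track values are illustrative):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic gridded field: value = day index + lat + lon, so linear
# interpolation reproduces it exactly and results are easy to check.
times = pd.date_range("2018-01-01", periods=4, freq="1D")
lat = np.array([10.0, 20.0, 30.0])
lon = np.array([100.0, 110.0, 120.0])
data = np.arange(4)[:, None, None] + lat[None, :, None] + lon[None, None, :]
ds = xr.Dataset(
    {"field": (("time", "lat", "lon"), data)},
    coords={"time": times, "lat": lat, "lon": lon},
)

# Irregular satellite track: one (time, lat, lon) triple per observation.
obs_time = xr.DataArray(
    pd.to_datetime(["2018-01-01T12:00", "2018-01-02T18:00"]), dims="obs"
)
obs_lat = xr.DataArray([12.0, 25.0], dims="obs")
obs_lon = xr.DataArray([104.0, 117.0], dims="obs")

# Because all three indexers share the "obs" dimension, interp returns
# one value per observation instead of a (time, lat, lon) cube.
track = ds["field"].interp(time=obs_time, lat=obs_lat, lon=obs_lon)
```

Interpolation over a datetime coordinate, the feature this issue requested, landed shortly after it was filed, which is why the example above can mix time with the spatial coordinates in a single call.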
**292653302 · pull #1869 · Add '_FillValue' to set of valid_encodings for netCDF4 backend**
node_id MDExOlB1bGxSZXF1ZXN0MTY1ODg2Mzky · user czr137 (6153603) · state closed · locked 0 · comments 9 · author_association CONTRIBUTOR · draft 0 · pull_request pydata/xarray/pulls/1869 · repo xarray (13221727) · reactions 0
created_at 2018-01-30T04:57:24Z · updated_at 2018-02-13T18:34:44Z · closed_at 2018-02-13T18:34:37Z
**292585285 · issue #1865 · Dimension Co-ordinates incorrectly saving _FillValue attribute**
node_id MDU6SXNzdWUyOTI1ODUyODU= · user czr137 (6153603) · state closed · locked 0 · comments 6 · author_association CONTRIBUTOR · state_reason completed · repo xarray (13221727) · reactions 0
created_at 2018-01-29T22:31:43Z · updated_at 2018-02-13T18:34:36Z · closed_at 2018-02-13T18:34:36Z

Code Sample, a copy-pastable example if possible

```python
import xarray as xr
import pandas as pd
import numpy as np

temp = 273.15 + 25 * np.random.randn(2, 2)
lon = [0.0, 5.0]
lat = [10.0, 20.0]

ds = xr.Dataset({'temperature': (['lat', 'lon'], temp)},
                coords={'lat': lat, 'lon': lon})
ds['lat'].attrs = {'standard_name': 'latitude', 'long_name': 'latitude',
                   'units': 'degrees_north', 'axis': 'Y'}
ds['lon'].attrs = {'standard_name': 'longitude', 'long_name': 'longitude',
                   'units': 'degrees_east', 'axis': 'X'}
ds['temperature'].attrs = {'standard_name': 'air_temperature', 'units': 'K'}
ds.attrs = {'title': 'non-conforming CF 1.6 data produced by xarray 0.10',
            'Conventions': 'CF-1.6'}
ds.to_netcdf('/tmp/test.nc')
```

Problem description

According to the last sentence of the first paragraph of 2.5.1. Missing data, valid and actual range of data in NetCDF Climate and Forecast (CF) Metadata Conventions 1.7:

When I use the conformance checker, it issues an INFO message to this point for the co-ordinate variables. Output of CF-Checker follows...

```
Checking variable: temperature
Checking variable: lat
INFO: attribute _FillValue is being used in a non-standard way
Checking variable: lon
INFO: attribute _FillValue is being used in a non-standard way

ERRORS detected: 0
WARNINGS given: 0
INFORMATION messages: 2
```

Expected Output

Co-ordinate variables should not store a _FillValue attribute.

Output of
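For reference, the way to suppress the attribute on a per-variable basis is to set the variable's `_FillValue` encoding to `None` before writing; a minimal sketch (the output path `/tmp/test_no_fill.nc` is illustrative):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"temperature": (("lat", "lon"), 273.15 + 25 * np.random.randn(2, 2))},
    coords={"lat": [10.0, 20.0], "lon": [0.0, 5.0]},
)

# An encoding of None tells xarray's netCDF writers to omit the
# _FillValue attribute for these coordinate variables entirely.
ds["lat"].encoding["_FillValue"] = None
ds["lon"].encoding["_FillValue"] = None
ds.to_netcdf("/tmp/test_no_fill.nc")
```

Reopening the file with `decode_cf=False` shows the raw attributes as written, so the absence of `_FillValue` on `lat` and `lon` can be verified directly.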
**216833414 · issue #1327 · Add 'count' as option for how in dataset resample**
node_id MDU6SXNzdWUyMTY4MzM0MTQ= · user czr137 (6153603) · state closed · locked 0 · comments 2 · author_association CONTRIBUTOR · state_reason completed · repo xarray (13221727) · reactions 0
created_at 2017-03-24T16:11:25Z · updated_at 2018-02-13T18:05:57Z · closed_at 2018-02-13T18:05:57Z

All of the usual aggregations are included in
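In the current resample API the aggregation is a method call rather than a `how=` argument, and `count` is available alongside the other aggregations; a minimal sketch (the sample values are illustrative):

```python
import numpy as np
import pandas as pd
import xarray as xr

da = xr.DataArray(
    [1.0, np.nan, 2.0, 3.0],
    coords={"time": pd.date_range("2017-01-01", periods=4, freq="12h")},
    dims="time",
)

# count() skips NaN, so each daily bin reports how many valid
# observations it actually contains: 1 on the first day, 2 on the second.
counts = da.resample(time="1D").count()
```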
**216836795 · pull #1328 · Add how=count option to resample**
node_id MDExOlB1bGxSZXF1ZXN0MTEyNDkzMzQ4 · user czr137 (6153603) · state closed · locked 0 · comments 3 · author_association CONTRIBUTOR · draft 0 · pull_request pydata/xarray/pulls/1328 · repo xarray (13221727) · reactions 0
created_at 2017-03-24T16:23:32Z · updated_at 2017-09-01T15:57:36Z · closed_at 2017-09-01T15:57:35Z