issues
4 rows where user = 18679148 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
949649935 | MDU6SXNzdWU5NDk2NDk5MzU= | 5625 | Add 'construct' method to Coarsen objects | ACHMartin 18679148 | closed | 0 | 1 | 2021-07-21T12:20:39Z | 2021-07-21T14:25:11Z | 2021-07-21T14:25:11Z | NONE | Is your feature request related to a problem? Please describe. Similarly to the 'construct' method for Rolling objects, I think it would make sense to have the same for Coarsen objects. My use case: for my da(lon, lat, time) of dimension (2000, 1600, 240), I wish to coarsen the data by a factor of 4 in lon and lat (to Lon and Lat) but keep the windowed values in a new dimension, giving da(Lon, Lat, time, samples) of dim (500, 400, 240, 16); my aim is then to do da.median(dim=['time','samples']). Describe the solution you'd like da.coarsen(lon=4, lat=4, boundary="trim").construct('samples') Describe alternatives you've considered I am using Rolling objects, but it increases the size of the matrix. Many thanks for your work |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5625/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
700915153 | MDU6SXNzdWU3MDA5MTUxNTM= | 4421 | circular + linear stats on a same dataset | ACHMartin 18679148 | open | 0 | 1 | 2020-09-14T09:03:24Z | 2020-11-01T13:44:26Z | NONE | Is your feature request related to a problem? Please describe.
I would like to calculate the mean over all variables of my dataset, which is easy and straightforward with xarray.
Describe the solution you'd like I would like to add a parameter in the description of my variable mentioning that it is circular, so that every time I apply a function to the full dataset, the circular equivalent is used where one exists. There are perhaps some tricks to do it, but I didn't find any so far. Thank you. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4421/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue |
663649344 | MDU6SXNzdWU2NjM2NDkzNDQ= | 4246 | combine_by_coords; proposition for a new option for combine_attrs = 'dim' | ACHMartin 18679148 | closed | 0 | 2 | 2020-07-22T10:22:32Z | 2020-07-23T18:46:51Z | 2020-07-23T18:46:51Z | NONE | I am combining a list of snapshots that all share the same geometry but have different times. Some time information appears in the attributes. I can 'drop' it, but I would prefer to keep it and add it along a given dimension (in my case, time). I believe in v0.15.1 the default was to drop it (with the default compat='no_conflicts'); I raise this because I got an error with the new default combine_attrs='no_conflicts' in v0.16.0. I would like an option of the form combine_attrs={'dim': 'time'}, or even better just combine_attrs='dim', where the dimension to use is found automatically. Thanks |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
663769801 | MDU6SXNzdWU2NjM3Njk4MDE= | 4247 | plot/utils get_axis cannot use subplot_kws with existing ax | ACHMartin 18679148 | closed | 0 | 2 | 2020-07-22T13:38:04Z | 2020-07-22T14:59:16Z | 2020-07-22T14:59:16Z | NONE | I get the following error when I move from xarray 0.15.1 to 0.16.0: "cannot use subplot_kws with existing ax". I don't really understand why this error is raised, but in the way I use it (see below), I feel I need both ax and subplot_kws. Please find below a minimal case:

```python
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

x = np.arange(0, 3)
y = np.arange(0, 5)
z = np.arange(0, 2)
xx, yy, zz = np.meshgrid(x, y, z)
lat = 40 + xx * 0.1 + yy * 1
lon = 5 + xx * 1 + yy * 0.1 + zz * 5
data3 = xx + yy + 5 * zz
data4 = np.stack([data3, data3 * 2, data3 * 4, data3 * 8])
ds = xr.Dataset({'data': (['time', 'x', 'y', 'z'], data4)},
                coords={'time': pd.date_range("2020-01-01", periods=4),
                        'lon': (['x', 'y', 'z'], lon),
                        'lat': (['x', 'y', 'z'], lat)})
```

So I have lon/lat maps with two swaths (z-dimension). I want to plot both swaths on the same map, and as a function of time:

```python
plt.figure()
for ss in z:
    ds.data.isel(time=0, z=ss).plot(x='lon', y='lat', add_colorbar=False, vmin=0, vmax=12)

plt.figure()
g = ds.data.isel(z=0).plot(x='lon', y='lat', col='time')
for cc, ax in enumerate(g.axes.flat):
    for ss in z:
        ds.data.isel(time=cc, z=ss).plot(x='lon', y='lat', add_colorbar=False, vmin=0, vmax=50, ax=ax)
```

All is relatively fine; the issue is when I want to project it onto a map using Cartopy:
Still fine with only one map:
but the "ValueError: cannot use subplot_kws with existing ax" is raised when I try with the facets plot:
On a side note, I don't understand why I am losing one pixel per row and column using the cartopy projection. Output of <tt>xr.show_versions()</tt>: INSTALLED VERSIONS ------------------ commit: None python: 3.7.6 (default, Jan 8 2020, 13:42:34) [Clang 4.0.1 (tags/RELEASE_401/final)] python-bits: 64 OS: Darwin OS-release: 19.5.0 machine: x86_64 processor: i386 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: en_GB.UTF-8 libhdf5: 1.10.4 libnetcdf: 4.6.1 xarray: 0.16.0 pandas: 1.0.1 numpy: 1.18.1 scipy: 1.4.1 netCDF4: 1.4.2 pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: 1.0.4.2 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2.10.1 distributed: 2.10.0 matplotlib: 3.1.3 cartopy: 0.17.0 seaborn: None numbagg: None pint: None setuptools: 45.2.0.post20200210 pip: 20.0.2 conda: None pytest: None IPython: 7.12.0 sphinx: None |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
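The 'construct' method requested in issue 5625 was later added to xarray's Coarsen objects; its actual signature takes a mapping from each coarsened dim to a (coarse_dim, window_dim) pair rather than a single name. A minimal sketch, assuming a recent xarray; the array shape and the dimension names Lon/lon_win etc. are invented for illustration:

```python
import numpy as np
import xarray as xr

# Toy stand-in for the da(lon, lat, time) array from issue 5625.
da = xr.DataArray(
    np.arange(8 * 4 * 3, dtype=float).reshape(8, 4, 3),
    dims=("lon", "lat", "time"),
)

# Coarsen by 4 in lon and lat, keeping each window's values as new dims
# instead of reducing them: construct() maps dim -> (coarse_dim, window_dim).
coarse = da.coarsen(lon=4, lat=4, boundary="trim").construct(
    lon=("Lon", "lon_win"), lat=("Lat", "lat_win")
)

# Median over time and both window dims, as the issue asked for.
result = coarse.median(dim=["time", "lon_win", "lat_win"])
```

This avoids the memory blow-up of the Rolling workaround, since each input value appears exactly once in the constructed array.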
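For the circular-statistics request in issue 4421, xarray has no built-in dispatch on a "circular" flag, but the behaviour can be approximated by tagging variables via attrs and branching per variable. A sketch, assuming angles in degrees on [0, 360); the attribute name "circular" and the helper function are an invented convention, not an xarray feature:

```python
import numpy as np
import xarray as xr

def circular_mean(da, dim, low=0.0, high=360.0):
    """Mean of a circular quantity: average unit vectors, then take the angle."""
    ang = (da - low) * (2 * np.pi) / (high - low)
    mean_ang = np.arctan2(np.sin(ang).mean(dim), np.cos(ang).mean(dim))
    return (mean_ang * (high - low) / (2 * np.pi)) % (high - low) + low

ds = xr.Dataset({
    "speed": ("time", [1.0, 2.0, 3.0]),          # linear variable
    "direction": ("time", [350.0, 10.0, 30.0]),  # circular variable, degrees
})
ds["direction"].attrs["circular"] = True  # invented flag, set by the user

# Apply the circular mean only where the flag is set, the plain mean elsewhere.
means = xr.Dataset({
    name: circular_mean(v, "time") if v.attrs.get("circular") else v.mean("time")
    for name, v in ds.data_vars.items()
})
# means["direction"] is ~10 degrees; a naive arithmetic mean would give 130.
```

scipy.stats.circmean implements the same vector-averaging idea if SciPy is available.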
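Issue 4246's combine_attrs='dim' was not adopted, but later xarray versions offer combine_attrs="drop_conflicts" to avoid the v0.16 error, and promoting the attribute to a time-indexed coordinate preserves the per-snapshot information the issue asks for. A sketch under those assumptions; the dataset and the "acquisition" attribute name are invented:

```python
import numpy as np
import xarray as xr

def snapshot(day):
    """One time slice with a per-snapshot attribute that conflicts on merge."""
    return xr.Dataset(
        {"data": (("time", "x"), np.zeros((1, 3)))},
        coords={"time": [day], "x": [0, 1, 2]},
        attrs={"acquisition": f"2020-07-{day:02d}"},
    )

# Option 1: silently drop only the conflicting attributes.
combined = xr.combine_by_coords(
    [snapshot(1), snapshot(2)], combine_attrs="drop_conflicts"
)

# Option 2: keep the information by lifting the attribute onto the time dim,
# approximating the proposed combine_attrs='dim' behaviour by hand.
def snapshot_with_coord(day):
    ds = snapshot(day)
    return ds.assign_coords(acquisition=("time", [ds.attrs.pop("acquisition")]))

combined2 = xr.combine_by_coords([snapshot_with_coord(1), snapshot_with_coord(2)])
```

Option 2 is usually preferable: the values survive the combine and stay aligned with the time index.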
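For the error in issue 4247, the usual workaround is to pass subplot_kws only to the call that creates the axes, and plain ax= when overlaying on axes that already exist. A cartopy-free sketch of the pattern (in the real case the axes would be created with subplot_kw={'projection': ccrs.PlateCarree()} and the overlay calls would add transform=); the array here is invented:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(4, 3, 5), dims=("time", "x", "y"))

# Single map: create the axes yourself (this is where a cartopy projection
# would go, via subplot_kw=...), then hand only ax to xarray.
fig, ax = plt.subplots()
da.isel(time=0).plot(ax=ax, add_colorbar=False)

# Facets: let the first plot build the grid (subplot_kws is allowed there,
# because xarray creates the axes), then overlay with ax= only.
g = da.plot(col="time")
for t, facet_ax in enumerate(g.axes.flat):
    da.isel(time=t).plot(ax=facet_ax, add_colorbar=False)
```

The rule of thumb: subplot_kws and ax are mutually exclusive because subplot_kws only makes sense at axes-creation time.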
CREATE TABLE [issues] ( [id] INTEGER PRIMARY KEY, [node_id] TEXT, [number] INTEGER, [title] TEXT, [user] INTEGER REFERENCES [users]([id]), [state] TEXT, [locked] INTEGER, [assignee] INTEGER REFERENCES [users]([id]), [milestone] INTEGER REFERENCES [milestones]([id]), [comments] INTEGER, [created_at] TEXT, [updated_at] TEXT, [closed_at] TEXT, [author_association] TEXT, [active_lock_reason] TEXT, [draft] INTEGER, [pull_request] TEXT, [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT, [state_reason] TEXT, [repo] INTEGER REFERENCES [repos]([id]), [type] TEXT ); CREATE INDEX [idx_issues_repo] ON [issues] ([repo]); CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]); CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]); CREATE INDEX [idx_issues_user] ON [issues] ([user]);