issues
3 rows where repo = 13221727, state = "open", and user = 12760310, sorted by updated_at descending
7957 · `FacetGrid` plot overlaying multiple variables from same dataset?
id: 1785599886 · user: guidocioni (12760310) · state: open · comments: 1 · created: 2023-07-03T08:15:42Z · updated: 2024-01-01T13:50:52Z · author_association: NONE · reactions: 0 · repo: xarray (13221727) · type: issue

What is your issue?

I'm trying to produce a facet plot which contains maps with different overlaid layers (e.g. a […]):

```python
p = dss['LH'].plot.pcolormesh(
    x='lon',
    y='lat',
    col="exp",
)
for i, ax in enumerate(p.axes.flat):
    ax.coastlines()
    ax.streamplot(
        dss.isel(exp=i).lon.values,
        dss.isel(exp=i).lat.values,
        dss.isel(exp=i)['u_10m_gr'].values,
        dss.isel(exp=i)['v_10m_gr'].values,
    )
```

This is far from optimal and doesn't really look clean to me. Also, I'm not entirely sure the order of […] All examples in the doc (https://docs.xarray.dev/en/stable/user-guide/plotting.html) refer to the […] However, on the doc it is mentioned […] It is not clear to me whether the […]
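One way to sidestep the ordering worry is to build the subplot grid by hand, so each axis index unambiguously corresponds to one `exp` slice. A minimal, self-contained sketch with synthetic data (matplotlib only, no cartopy; `lh`, `u`, `v` are hypothetical stand-ins for `dss['LH']`, `u_10m_gr`, `v_10m_gr` from the report):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, suitable for scripts
import matplotlib.pyplot as plt
import numpy as np

# Synthetic data with shape (exp, lat, lon)
rng = np.random.default_rng(0)
lon = np.linspace(0, 10, 20)
lat = np.linspace(40, 50, 15)
lh = rng.normal(size=(3, lat.size, lon.size))
u = rng.normal(size=(3, lat.size, lon.size))
v = rng.normal(size=(3, lat.size, lon.size))

fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
for i, ax in enumerate(axes.flat):
    # i is the explicit index into the 'exp' dimension, so the filled
    # layer and the streamline layer are guaranteed to match.
    mesh = ax.pcolormesh(lon, lat, lh[i], shading="auto")
    ax.streamplot(lon, lat, u[i], v[i], color="k", linewidth=0.5)
    ax.set_title(f"exp = {i}")
fig.colorbar(mesh, ax=axes.ravel().tolist())
fig.savefig("facets.png")
```

If you do stay with `FacetGrid`, its `name_dicts` attribute records which coordinate value each axis shows, which lets you look up the matching slice instead of relying on enumeration order.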
6904 · `sel` behaving randomly when applying to a dataset with multiprocessing
id: 1333650265 · user: guidocioni (12760310) · state: open · comments: 12 · created: 2022-08-09T18:43:06Z · updated: 2022-08-10T16:48:53Z · author_association: NONE

What happened?

I have a script structured like this:

```python
def main():
    global ds
    ds = xr.open_dataset(file)
    for point in points:
        compute(point)

def compute(point):
    ds_point = ds.sel(lat=point['latitude'],
                      lon=point['longitude'],
                      method='nearest')
    print(ds_point.var.mean())
    # do something with ds_point and other data...

if __name__ == "__main__":
    main()
```

This works as expected. However, if I try to parallelize […] The results of the print are completely different from the serial example, and they change every time I run the script. It seems that the […] If I move the […] Is this supposed to happen? I really don't understand how.

What did you expect to happen?

The behaviour of […]

Minimal Complete Verifiable Example: No response

MVCE confirmation: […]

Relevant log output: No response

Anything else we need to know?: No response

Environment:
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10)
[GCC 10.3.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-229.1.2.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.utf8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 2022.3.0
pandas: 1.2.3
numpy: 1.20.3
scipy: 1.8.1
netCDF4: 1.5.6
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.6.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.1
cfgrib: None
iris: None
bottleneck: None
dask: 2022.7.1
distributed: 2022.7.1
matplotlib: 3.5.2
cartopy: 0.18.0
seaborn: 0.11.2
numbagg: None
fsspec: 2022.5.0
cupy: None
pint: 0.19.2
sparse: None
setuptools: 59.8.0
pip: 22.2
conda: 4.13.0
pytest: None
IPython: 8.4.0
sphinx: None
|
reactions: 0 · repo: xarray (13221727) · type: issue
5549 · Time is not correctly saved to disk netcdf
id: 932444037 · user: guidocioni (12760310) · state: open · comments: 0 · created: 2021-06-29T10:00:36Z · updated: 2021-06-29T10:00:36Z · author_association: NONE

What happened: When trying to write a dataset to a netCDF file using the netcdf4 engine, time is not saved correctly.

What you expected to happen: Time to be saved correctly, as in the original dataset.

Minimal Complete Verifiable Example: […]

is giving me the warning […]

xarray Dataset saved on disk (notice time values) […] I cannot see anything special in the time array... is there a limitation because of the compression?

Environment: Output of xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 16:22:27)
[GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-229.1.2.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.utf8
LOCALE: en_US.UTF-8
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.17.0
pandas: 1.2.3
numpy: 1.20.1
scipy: 1.6.3
netCDF4: 1.5.6
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.5.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.2
cfgrib: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: 3.3.4
cartopy: 0.18.0
seaborn: 0.11.1
numbagg: None
pint: 0.17
setuptools: 49.6.0.post20210108
pip: 21.1.1
conda: 4.10.2
pytest: None
IPython: 7.21.0
sphinx: None

reactions: 0 · repo: xarray (13221727) · type: issue
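One common culprit for mangled on-disk times (hypothetical here, since the MCVE is missing) is the CF time encoding: time is stored as numbers relative to a reference in the encoding's `units`, and an integer on-disk dtype combined with too-coarse units truncates sub-unit offsets. A numpy-only sketch of the failure mode:

```python
import numpy as np

# Two timestamps half an hour apart
times = np.array(["2021-06-29T10:00", "2021-06-29T10:30"], dtype="datetime64[m]")
ref = np.datetime64("2021-06-29T00:00", "m")
offsets_minutes = (times - ref).astype(float)  # 600.0, 630.0

# Encoded as "days since <ref>" with an integer on-disk dtype:
offsets_days_int = (offsets_minutes / (24 * 60)).astype(np.int64)  # 0, 0

# Decoding recovers only whole days -> both timestamps collapse to midnight
decoded = ref + (offsets_days_int * 24 * 60).astype("timedelta64[m]")
```

In xarray, the usual remedy is to pin the time encoding explicitly when writing, e.g. passing `encoding={'time': {'units': ..., 'dtype': 'float64'}}` to `to_netcdf`, and to keep compression-related `scale_factor`/`add_offset` settings off the time variable.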