issues

3 rows where repo = 13221727, state = "open" and user = 12760310 sorted by updated_at descending



#7957: `FacetGrid` plot overlaying multiple variables from same dataset?
id 1785599886 · node_id I_kwDOAMm_X85qbheO · user guidocioni (12760310) · state open · locked 0 · comments 1 · created_at 2023-07-03T08:15:42Z · updated_at 2024-01-01T13:50:52Z · author_association NONE

What is your issue?

I'm trying to produce a facet plot that contains maps with different overlaid layers (e.g. a pcolormesh and a streamplot). At the moment I'm creating the plot and then iterating over the axes to add the overlays manually:

```python
p = dss['LH'].plot.pcolormesh(
    x='lon',
    y='lat',
    col="exp",
)

for i, ax in enumerate(p.axes.flat):
    ax.coastlines()
    ax.streamplot(
        dss.isel(exp=i).lon.values,
        dss.isel(exp=i).lat.values,
        dss.isel(exp=i)['u_10m_gr'].values,
        dss.isel(exp=i)['v_10m_gr'].values,
    )
```

This is far from optimal and doesn't really look clean to me. Also, I'm not entirely sure the order of p.axes.flat corresponds to the order of the exp dimension I'm using to facet.
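A sketch of a way to make the manual loop independent of positional order, assuming the FacetGrid object exposes name_dicts (the per-facet selection dictionaries used internally by xarray's faceting) would be:

```python
# Hedged sketch: select each facet's subset by its own coordinate value,
# so the overlay does not rely on the ordering of p.axes.flat.
# Assumes p.name_dicts exists and holds dicts like {"exp": <value>}.
for ax, sel in zip(p.axes.flat, p.name_dicts.flat):
    if sel is None:  # unused trailing facet
        continue
    sub = dss.sel(sel)  # the subset actually drawn on this axis
    ax.coastlines()
    ax.streamplot(
        sub.lon.values,
        sub.lat.values,
        sub['u_10m_gr'].values,
        sub['v_10m_gr'].values,
    )
```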

All examples in the docs (https://docs.xarray.dev/en/stable/user-guide/plotting.html) refer to the plot method of DataArray, so it seems that, once the p object has been created, no other variable from the dataset can be accessed.

However, the docs mention:

TODO: add an example of using the map method to plot dataset variables (e.g., with plt.quiver).

It is not clear to me whether the xarray.plot.FacetGrid.map method can indeed be used to plot another dataset variable or not. If that's not the case, is there any way to achieve what I'm doing without manually looping through the axes?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7957/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue
#6904: `sel` behaving randomly when applying to a dataset with multiprocessing
id 1333650265 · node_id I_kwDOAMm_X85PfeNZ · user guidocioni (12760310) · state open · locked 0 · comments 12 · created_at 2022-08-09T18:43:06Z · updated_at 2022-08-10T16:48:53Z · author_association NONE

What happened?

I have a script structured like this:

```python
import xarray as xr

def main():
    global ds
    ds = xr.open_dataset(file)  # `file` and `points` are defined elsewhere
    for point in points:
        compute(point)

def compute(point):
    ds_point = ds.sel(lat=point['latitude'],
                      lon=point['longitude'],
                      method='nearest')
    print(ds_point.var.mean())
    # do something with ds_point and other data...

if __name__ == "__main__":
    main()
```

This works as expected. However, if I try to parallelize compute by calling it with

```python
process_map(compute, points, max_workers=5, chunksize=1)
```
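For context, process_map here is presumably the helper from tqdm (an assumption; the issue does not show the import), which wraps a process pool with a progress bar:

```python
# Assumed import: tqdm's process_map wraps
# concurrent.futures.ProcessPoolExecutor and forks worker processes.
from tqdm.contrib.concurrent import process_map

process_map(compute, points, max_workers=5, chunksize=1)
```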

The results of the print are completely different from the serial example, and they change every time I run the script. It seems that sel is giving back a different part of the dataset when there are multiple processes running in parallel.

If I move the open_dataset call inside compute, the parallel case behaves the same as the serial one. Likewise, if I load the dataset into memory at the beginning, i.e. ds = xr.open_dataset(file).load(), I get reproducible results.
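For reference, the two workarounds described above look roughly like this (a sketch reusing the undefined file and points from the snippet above):

```python
import xarray as xr

# Workaround 1: open the dataset inside each worker, so every process
# gets its own file handle instead of sharing one across processes.
def compute(point):
    ds = xr.open_dataset(file)
    ds_point = ds.sel(lat=point['latitude'],
                      lon=point['longitude'],
                      method='nearest')
    print(ds_point.var.mean())

# Workaround 2: load the whole dataset into memory before parallelizing,
# so the workers no longer share a lazy, file-backed object.
ds = xr.open_dataset(file).load()
```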

Is this supposed to happen? I really don't understand how.

What did you expect to happen?

The behaviour of sel should be the same in parallel or serial execution.

Minimal Complete Verifiable Example

No response

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [ ] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

No response

Anything else we need to know?

No response

Environment

INSTALLED VERSIONS
------------------
commit: None
python: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-229.1.2.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.utf8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 2022.3.0
pandas: 1.2.3
numpy: 1.20.3
scipy: 1.8.1
netCDF4: 1.5.6
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.6.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.1
cfgrib: None
iris: None
bottleneck: None
dask: 2022.7.1
distributed: 2022.7.1
matplotlib: 3.5.2
cartopy: 0.18.0
seaborn: 0.11.2
numbagg: None
fsspec: 2022.5.0
cupy: None
pint: 0.19.2
sparse: None
setuptools: 59.8.0
pip: 22.2
conda: 4.13.0
pytest: None
IPython: 8.4.0
sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6904/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue
#5549: Time is not correctly saved to disk netcdf
id 932444037 · node_id MDU6SXNzdWU5MzI0NDQwMzc= · user guidocioni (12760310) · state open · locked 0 · comments 0 · created_at 2021-06-29T10:00:36Z · updated_at 2021-06-29T10:00:36Z · author_association NONE

What happened: When trying to write a dataset to a netCDF file using the netcdf4 engine, time is not saved correctly.

What you expected to happen: Time to be saved correctly as in the original dataset.

Minimal Complete Verifiable Example:

```python
ds.to_netcdf(filename,
             encoding={product_type: {'zlib': True, 'complevel': 9}},
             engine='netcdf4')
```

This gives me the warning: `SerializationWarning: saving variable time with floating point data as an integer dtype without any _FillValue to use for NaNs`.

[Screenshots in the original issue: the xarray Dataset in memory, and the values saved on disk (notice the time values).]

I cannot see anything special in the time array... is there a limitation because of the compression?
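For what it's worth, a sketch of a possible workaround, assuming the problem is the default integer encoding chosen for the time variable (the dtype and units string below are illustrative assumptions, not taken from the issue):

```python
# Hedged sketch: spell out the time encoding explicitly so xarray does not
# fall back to an integer dtype with no _FillValue.
encoding = {
    product_type: {"zlib": True, "complevel": 9},
    "time": {"dtype": "float64", "units": "hours since 1900-01-01"},
}
ds.to_netcdf(filename, encoding=encoding, engine="netcdf4")
```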

Environment:

Output of xr.show_versions():

INSTALLED VERSIONS
------------------
commit: None
python: 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 16:22:27) [GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-229.1.2.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.utf8
LOCALE: en_US.UTF-8
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.17.0
pandas: 1.2.3
numpy: 1.20.1
scipy: 1.6.3
netCDF4: 1.5.6
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.5.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.2
cfgrib: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: 3.3.4
cartopy: 0.18.0
seaborn: 0.11.1
numbagg: None
pint: 0.17
setuptools: 49.6.0.post20210108
pip: 21.1.1
conda: 4.10.2
pytest: None
IPython: 7.21.0
sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5549/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);