issue_comments
12 rows where issue = 1333650265 sorted by updated_at descending
Issue: `sel` behaving randomly when applying to a dataset with multiprocessing (#6904)
shoyer (MEMBER) · 2022-08-10T16:43:36Z · https://github.com/pydata/xarray/issues/6904#issuecomment-1210976795

You might look into different multiprocessing modes: https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods

It may also be that the NetCDF or HDF5 libraries were simply not written in a way that can support multi-processing. This would not surprise me.

I agree, maybe this isn't worth the trouble. I have not seen it done successfully before.
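The start-method suggestion can be exercised directly. Below is a minimal sketch, not from the thread, that forces the "spawn" start method so workers do not fork-inherit the parent's open HDF5 state; the file name `input.nc` and variable `t_2m_max` are taken from the reproducer further down, everything else is assumed:

```python
import multiprocessing as mp

import xarray as xr


def compute(station):
    # each spawned worker opens its own file handle; nothing is inherited
    with xr.open_dataset("input.nc") as ds:
        ds_point = ds.isel(lat=0, lon=0)
        return station, ds_point.t_2m_max.mean().item()


if __name__ == "__main__":
    # "fork" (the Linux default) copies the parent's open HDF5 state into
    # every child; "spawn" starts clean interpreters instead
    ctx = mp.get_context("spawn")
    with ctx.Pool(processes=6) as pool:
        print(pool.map(compute, range(5)))
```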
guidocioni (NONE) · 2022-08-10T09:07:00Z · https://github.com/pydata/xarray/issues/6904#issuecomment-1210383450

This is a minimal working example that I could come up with. You can try to open any netcdf file that you have. I tested on a small one and it didn't reproduce the error, so it is definitely only happening with large datasets, when the arrays are not loaded into memory. Unfortunately, as you need a large file, I cannot really attach one here.

```python
import xarray as xr
from tqdm.contrib.concurrent import process_map
import pprint


def main():
    global ds
    ds = xr.open_dataset('input.nc')
    it = range(0, 5)

    results = []
    for i in it:
        results.append(compute(i))
    print("------------Serial results-----------------")
    pprint.pprint(results)

    results = process_map(compute, it, max_workers=6, chunksize=1, disable=True)
    print("------------Parallel results-----------------")
    pprint.pprint(results)


def compute(station):
    ds_point = ds.isel(lat=0, lon=0)
    return (station,
            ds_point.t_2m_max.mean().item(),
            ds_point.t_2m_min.mean().item(),
            ds_point.lon.min().item(),
            ds_point.lat.min().item())


if __name__ == "__main__":
    main()
```
guidocioni (NONE) · 2022-08-10T08:38:31Z · https://github.com/pydata/xarray/issues/6904#issuecomment-1210349031

Ok, it seems to fail also with exact lookups o.O This is extremely weird. I'm using: […]

Result for the serial version: […]

And for the parallel version with EXACTLY the same code: […]
guidocioni (NONE) · 2022-08-10T08:32:13Z · https://github.com/pydata/xarray/issues/6904#issuecomment-1210341456

That causes an error. Here is the complete traceback: […]

I think we may be heading in the right direction.
guidocioni (NONE) · 2022-08-10T07:41:20Z · https://github.com/pydata/xarray/issues/6904#issuecomment-1210285626

Mmm, ok, I'll try and let you know. BTW, is there any advantage or difference in terms of CPU and memory consumption between opening the file only once and letting every process open it? I'm asking because I thought opening it in every process was just plain stupid, but it seems to perform exactly the same, so maybe I'm creating a problem where there is none.
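A middle ground between those two options is to open the file once per worker process (rather than once per task) using a pool initializer. A minimal sketch, not from the thread, reusing the `input.nc` and `t_2m_max` names from the reproducer above:

```python
from concurrent.futures import ProcessPoolExecutor

import xarray as xr

ds = None  # set in each worker by _init


def _init(path):
    # runs once per worker process: every worker gets its own handle,
    # and all tasks dispatched to that worker reuse it
    global ds
    ds = xr.open_dataset(path)


def compute(station):
    ds_point = ds.isel(lat=0, lon=0)
    return station, ds_point.t_2m_max.mean().item()


if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=6, initializer=_init,
                             initargs=("input.nc",)) as ex:
        print(list(ex.map(compute, range(5))))
```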
shoyer (MEMBER) · 2022-08-10T07:10:41Z · https://github.com/pydata/xarray/issues/6904#issuecomment-1210255676

Yes, it should, as long as you're using multi-processing under the covers. If you do multi-threading, then you would want to use a threading lock instead.
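For the multi-threading case, that lock would be an ordinary `threading.Lock` held around every access to the shared dataset. A hypothetical sketch, with all names assumed rather than taken from the thread:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

import xarray as xr

ds = xr.open_dataset("input.nc")
lock = threading.Lock()


def compute(station):
    # serialize all reads through the single shared file handle
    with lock:
        ds_point = ds.isel(lat=0, lon=0)
        return station, ds_point.t_2m_max.mean().item()


with ThreadPoolExecutor(max_workers=6) as ex:
    print(list(ex.map(compute, range(5))))
```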
guidocioni (NONE) · 2022-08-10T06:51:18Z · https://github.com/pydata/xarray/issues/6904#issuecomment-1210238864

Ok, that's a good shot. Will that work in the same way if I still use `process_map`?
shoyer (MEMBER) · 2022-08-10T06:45:06Z · https://github.com/pydata/xarray/issues/6904#issuecomment-1210233503

Can you try explicitly passing in a multiprocessing lock into the […]?

(We automatically select appropriate locks if using Dask, but I'm not sure how we would do that more generally...)
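The comment is truncated where it names the exact argument, so the following is only an assumption about the shape of the fix: share a `multiprocessing.Lock` with the workers and hold it around every read of the inherited dataset. A sketch, assuming the "fork" start method and the file/variable names from the reproducer:

```python
import multiprocessing as mp

import xarray as xr

# with "fork" (the Linux default), children inherit both of these
ds = xr.open_dataset("input.nc")
lock = mp.Lock()


def compute(station):
    with lock:  # only one process touches the HDF5 layer at a time
        ds_point = ds.isel(lat=0, lon=0)
        return station, ds_point.t_2m_max.mean().item()


if __name__ == "__main__":
    with mp.Pool(processes=6) as pool:
        print(pool.map(compute, range(5)))
```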
guidocioni (NONE) · 2022-08-10T06:30:06Z · https://github.com/pydata/xarray/issues/6904#issuecomment-1210220238

I haven't tried yet because it doesn't really match my use case. One idea I had was to build the list of points before starting the loop, creating an iterable of slices from the xarray dataset, and then passing that to the loop. But I would end up reading more data than necessary, because I don't process all cases. Another thing I've noticed is that if the list of inputs is smaller than the chunksize, everything is fine, probably because it reverts to the serial case, as only one worker is processing.
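That pre-slicing idea could look like the sketch below (assumed names, not from the thread): extract and `.load()` the needed slices in the main process, then map the workers over plain in-memory objects, so no worker ever touches the file:

```python
import xarray as xr
from tqdm.contrib.concurrent import process_map

ds = xr.open_dataset("input.nc")

# build the per-point slices up front, forcing them into memory with .load();
# this reads everything before the pool starts, which is the extra I/O
# cost mentioned above
stations = range(5)
slices = [ds.isel(lat=0, lon=0).load() for _ in stations]


def compute(args):
    # loaded Datasets pickle cleanly, so they can be sent to the workers
    station, ds_point = args
    return station, ds_point.t_2m_max.mean().item()


if __name__ == "__main__":
    results = process_map(compute, list(zip(stations, slices)),
                          max_workers=6, chunksize=1)
    print(results)
```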
max-sixty (MEMBER) · 2022-08-10T06:24:54Z · https://github.com/pydata/xarray/issues/6904#issuecomment-1210216148

Re nearest: does it replicate with exact lookups?
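For reference, the distinction being asked about, as a sketch with assumed coordinate values:

```python
import xarray as xr

ds = xr.open_dataset("input.nc")  # assumed file with lat/lon coordinates

# positional (exact) lookup by integer index, as in the reproducer
point = ds.isel(lat=0, lon=0)

# label-based exact lookup: raises KeyError unless the value matches a
# coordinate exactly
point = ds.sel(lat=52.5, lon=13.4)

# nearest-neighbour lookup: snaps to the closest coordinate value
point = ds.sel(lat=52.5, lon=13.4, method="nearest")
```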
guidocioni (NONE) · 2022-08-10T05:23:13Z (edited 2022-08-10T05:24:24Z) · https://github.com/pydata/xarray/issues/6904#issuecomment-1210174583

Yep, and yep (believe me, I've tried everything in desperation 😄).

Which method should I use then? I need the closest point.

Yep, I can try to make a minimal example; however, in order to reproduce the issue, I think it's necessary to open a large dataset.
`sel` behaving randomly when applying to a dataset with multiprocessing 1333650265 | |
1209921400 | https://github.com/pydata/xarray/issues/6904#issuecomment-1209921400 | https://api.github.com/repos/pydata/xarray/issues/6904 | IC_kwDOAMm_X85IHe94 | max-sixty 5635139 | 2022-08-09T21:39:21Z | 2022-08-09T21:39:21Z | MEMBER | That sounds quite unfriendly! A couple of questions to reduce the size of the example, without providing any answers yet unfortunately:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
`sel` behaving randomly when applying to a dataset with multiprocessing 1333650265 |