issue_comments
9 rows where author_association = "NONE" and issue = 91184107 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
119698728 | https://github.com/pydata/xarray/issues/444#issuecomment-119698728 | https://api.github.com/repos/pydata/xarray/issues/444 | MDEyOklzc3VlQ29tbWVudDExOTY5ODcyOA== | razcore-rad 1177508 | 2015-07-08T19:07:41Z | 2015-07-08T19:07:41Z | NONE | I think this issue can be closed, after some digging and playing with different | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | segmentation fault with `open_mfdataset` 91184107 |
118436430 | https://github.com/pydata/xarray/issues/444#issuecomment-118436430 | https://api.github.com/repos/pydata/xarray/issues/444 | MDEyOklzc3VlQ29tbWVudDExODQzNjQzMA== | andrewcollette 3101370 | 2015-07-03T23:02:52Z | 2015-07-03T23:02:52Z | NONE | @shoyer, there are basically two levels of thread safety for HDF5/h5py. First, the HDF5 library has an optional compile-time "threadsafe" build option that wraps all API access in a lock. This is all-or-nothing; I'm not aware of any per-file effects. Second, h5py uses its own global lock on the Python side to serialize access, which is only disabled in MPI mode. For added protection, h5py also does not presently release the GIL around reads/writes. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | segmentation fault with `open_mfdataset` 91184107 |
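The comment above describes h5py serializing all access behind one process-wide lock. The same pattern can be applied in user code when a library is not thread-safe; this is a minimal sketch using only the standard library, where `read_chunk` is a hypothetical stand-in for an h5py read (h5py itself is not assumed to be installed):

```python
import threading

# One process-wide lock serializing all access, mirroring the global
# Python-side lock the comment attributes to h5py.
hdf5_lock = threading.Lock()

def read_chunk(store, key):
    # Hypothetical stand-in for something like h5py.File(path)[key][...]
    return store[key]

def safe_read(store, key):
    # Only one thread at a time may enter the underlying library.
    with hdf5_lock:
        return read_chunk(store, key)

store = {"a": 1, "b": 2}
threads = [threading.Thread(target=safe_read, args=(store, k)) for k in store]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because h5py (per the comment) also holds the GIL during reads/writes, such a lock adds little speed cost; it only guarantees serialization.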
118373477 | https://github.com/pydata/xarray/issues/444#issuecomment-118373477 | https://api.github.com/repos/pydata/xarray/issues/444 | MDEyOklzc3VlQ29tbWVudDExODM3MzQ3Nw== | razcore-rad 1177508 | 2015-07-03T15:28:16Z | 2015-07-03T15:28:16Z | NONE | Per file basis ( | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | segmentation fault with `open_mfdataset` 91184107 |
118091969 | https://github.com/pydata/xarray/issues/444#issuecomment-118091969 | https://api.github.com/repos/pydata/xarray/issues/444 | MDEyOklzc3VlQ29tbWVudDExODA5MTk2OQ== | razcore-rad 1177508 | 2015-07-02T16:55:02Z | 2015-07-02T16:55:02Z | NONE | Yes, I'm using the same files that I once uploaded on Dropbox for you to play with for #443. I'm not doing anything special, just passing in the glob pattern to | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | segmentation fault with `open_mfdataset` 91184107 |
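The commenter passes a glob pattern straight to `open_mfdataset`. A sketch of that usage, with the glob expansion itself runnable via the standard library and the xarray call shown only as a comment (xarray is not assumed installed here, and the file names are made up):

```python
import glob
import os
import tempfile

# Create a few empty dummy files so the glob has something to match.
tmpdir = tempfile.mkdtemp()
for name in ("a.nc", "b.nc"):
    open(os.path.join(tmpdir, name), "w").close()

pattern = os.path.join(tmpdir, "*.nc")
paths = sorted(glob.glob(pattern))

# With xarray installed, the call the commenter describes is simply:
# import xarray as xr
# ds = xr.open_mfdataset(pattern)  # open_mfdataset accepts a glob string
```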
117993960 | https://github.com/pydata/xarray/issues/444#issuecomment-117993960 | https://api.github.com/repos/pydata/xarray/issues/444 | MDEyOklzc3VlQ29tbWVudDExNzk5Mzk2MA== | razcore-rad 1177508 | 2015-07-02T10:36:06Z | 2015-07-02T12:18:09Z | NONE | OK... as a follow-up, I did some tests and with This is simple to solve.. just have every edit: boy... there are some differences between these packages ( I didn't put the full error cause I don't think it's relevant. Anyway, needless to say... edit2: so I was going through the posts here and now I saw you addressed this issue using that | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | segmentation fault with `open_mfdataset` 91184107 |
117217039 | https://github.com/pydata/xarray/issues/444#issuecomment-117217039 | https://api.github.com/repos/pydata/xarray/issues/444 | MDEyOklzc3VlQ29tbWVudDExNzIxNzAzOQ== | razcore-rad 1177508 | 2015-06-30T14:55:58Z | 2015-06-30T14:55:58Z | NONE | Well... I have a couple of remarks to make. After some more thought about this it might have been all along my fault. Let me explain. I have this machine at work where I don't have administrative privileges so I decided to give | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | segmentation fault with `open_mfdataset` 91184107 |
116146897 | https://github.com/pydata/xarray/issues/444#issuecomment-116146897 | https://api.github.com/repos/pydata/xarray/issues/444 | MDEyOklzc3VlQ29tbWVudDExNjE0Njg5Nw== | razcore-rad 1177508 | 2015-06-27T21:33:30Z | 2015-06-27T21:33:30Z | NONE | So I just tried @mrocklin's idea with using single-threaded stuff. This seems to fix the segmentation fault, but I am very curious as to why there's a problem with working in parallel. I tried two different hdf5 libraries (I think version 1.8.13 and 1.8.14) but I got the same segmentation fault. Anyway, working on a single thread is not a big deal, I'll just do that for the time being... I already tried @shoyer, the files are not the issue here, they're the same ones I provided in #443. Question: does the hdf5 library need to be built with parallel support (mpi or something) maybe?... thanks guys | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | segmentation fault with `open_mfdataset` 91184107 |
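The workaround that fixed the crash was forcing single-threaded execution. The same idea can be sketched with the standard library: capping a thread pool at one worker serializes the per-file reads, which is what switching dask to its single-threaded scheduler accomplishes (`read_file` below is a hypothetical stand-in for a read that is not thread-safe):

```python
from concurrent.futures import ThreadPoolExecutor

def read_file(path):
    # Hypothetical stand-in for opening and reading one netCDF/HDF5 file.
    return len(path)

paths = ["a.nc", "b.nc", "c.nc"]

# max_workers=1 forces the reads to run one at a time -- the same effect
# as the single-threaded execution the commenter switched to.
with ThreadPoolExecutor(max_workers=1) as pool:
    results = list(pool.map(read_file, paths))
```

This trades parallelism for safety; it is a workaround, not a fix for the underlying thread-safety problem discussed in the thread.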
115906191 | https://github.com/pydata/xarray/issues/444#issuecomment-115906191 | https://api.github.com/repos/pydata/xarray/issues/444 | MDEyOklzc3VlQ29tbWVudDExNTkwNjE5MQ== | razcore-rad 1177508 | 2015-06-26T22:10:46Z | 2015-06-26T22:22:11Z | NONE | Just tried edit: I was right... it's actually the | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | segmentation fault with `open_mfdataset` 91184107 |
115900337 | https://github.com/pydata/xarray/issues/444#issuecomment-115900337 | https://api.github.com/repos/pydata/xarray/issues/444 | MDEyOklzc3VlQ29tbWVudDExNTkwMDMzNw== | razcore-rad 1177508 | 2015-06-26T21:50:01Z | 2015-06-26T21:53:50Z | NONE | Unfortunately I can't use `print(arr1.dtype, arr2.dtype); print((arr1 == arr2)); print((arr1 == arr2) \| (isnull(arr1) & isnull(arr2)))` gives: `float64 float64 dask.array<x_1, shape=(50, 39, 59), chunks=((50,), (39,), (59,)), dtype=bool> dask.array<x_6, shape=(50, 39, 59), chunks=((50,), (39,), (59,)), dtype=bool>` Funny thing is when I'm adding these print statements and so on I get some traceback from Python (sometimes). Without them I would only get segmentation fault with no additional information. For example, just now, after introducing these edit: oh yeah... this is a funny thing. If I do | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | segmentation fault with `open_mfdataset` 91184107 |
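The expression printed in the comment above, `(arr1 == arr2) | (isnull(arr1) & isnull(arr2))`, is a NaN-aware equality test: plain `==` is False wherever either side is NaN, even when both are, so the `isnull` term patches exactly those positions. A minimal NumPy version of the same idea (NumPy's `isnan` standing in for `isnull`):

```python
import numpy as np

def nan_aware_equal(a, b):
    # Plain == yields False at NaN positions even when both sides are NaN;
    # the isnan term marks those positions as equal instead.
    return (a == b) | (np.isnan(a) & np.isnan(b))

x = np.array([1.0, np.nan, 3.0])
y = np.array([1.0, np.nan, 4.0])
mask = nan_aware_equal(x, y)  # elementwise boolean array
```

With dask arrays, as in the comment, the same expression builds a lazy boolean graph instead of an eager array, which is why the print statements show `dask.array<..., dtype=bool>` rather than values.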
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
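The page header ("9 rows where author_association = \"NONE\" and issue = 91184107 sorted by updated_at descending") corresponds to a simple filter over this schema. A sketch reproducing that query with Python's stdlib `sqlite3`, against an in-memory table; the two inserted rows are made up for illustration and only the filtered columns are populated:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE issue_comments (
         html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY, node_id TEXT,
         user INTEGER, created_at TEXT, updated_at TEXT,
         author_association TEXT, body TEXT, reactions TEXT,
         performed_via_github_app TEXT, issue INTEGER)"""
)
# Illustrative rows: one matches the page's filter, one does not.
conn.execute(
    "INSERT INTO issue_comments (id, updated_at, author_association, issue) "
    "VALUES (1, '2015-07-08T19:07:41Z', 'NONE', 91184107)"
)
conn.execute(
    "INSERT INTO issue_comments (id, updated_at, author_association, issue) "
    "VALUES (2, '2015-07-03T23:02:52Z', 'MEMBER', 91184107)"
)

# The query behind the page header.
rows = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE author_association = 'NONE' AND issue = 91184107 "
    "ORDER BY updated_at DESC"
).fetchall()
```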