issue_comments
17 rows where author_association = "NONE" and user = 32069530 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
1450036767 | https://github.com/pydata/xarray/issues/6196#issuecomment-1450036767 | https://api.github.com/repos/pydata/xarray/issues/6196 | IC_kwDOAMm_X85Wbc4f | lanougue 32069530 | 2023-03-01T12:09:21Z | 2023-03-01T12:09:40Z | NONE | Hello @TomNicholas, reopening this issue one year later! To answer your last question, singleton dimensions do indeed seem to have a unique behavior, since they are systematically reattached to the other coordinates (even though they naturally share no dimension with those coordinates).
These singleton dimensions introduce some strange behavior. Here is another example:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Wrong list of coordinate when a singleton coordinate exists 1115166039 | |
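The example referenced in the comment above is truncated in this export. A minimal sketch, on assumed data, of the behavior it describes: a scalar ("singleton") coordinate is attached to every variable in the dataset.

```python
import numpy as np
import xarray as xr

# Assumed data, not the author's elided example: "band" is a scalar
# coordinate and shows up in the coords of every variable.
ds = xr.Dataset(
    {"temperature": ("x", np.arange(3.0))},
    coords={"x": [10, 20, 30], "band": 1},
)
print(ds["temperature"].coords)  # lists "x" and the scalar "band"
```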
1255029201 | https://github.com/pydata/xarray/issues/2805#issuecomment-1255029201 | https://api.github.com/repos/pydata/xarray/issues/2805 | IC_kwDOAMm_X85KzjnR | lanougue 32069530 | 2022-09-22T13:30:26Z | 2022-09-22T16:12:47Z | NONE | Hello guys, while waiting for an integrated solution, here is a function that should do the job in a safe way. It returns an iterator:
```python
import numpy as np

def xndindex(ds, dims=None):
    if dims is None:
        dims = ds.dims
    elif isinstance(dims, str):
        dims = [dims]
    # The rest of the function is truncated in this export; a plausible
    # completion (an assumption, not necessarily the author's code):
    for idx in np.ndindex(*(ds.sizes[d] for d in dims)):
        yield dict(zip(dims, idx))
```
|
{ "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[Feature Request] iteration equivalent numpy's nditer or ndenumerate 419543087 | |
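A hypothetical usage of xndindex, as completed in the sketch above, on assumed data:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"v": (("x", "y"), np.arange(6).reshape(2, 3))})
for i in xndindex(ds):                # xndindex as sketched above
    print(i, ds["v"].isel(i).item())  # i is a {dim: integer_index} mapping
```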
1239645797 | https://github.com/pydata/xarray/issues/2805#issuecomment-1239645797 | https://api.github.com/repos/pydata/xarray/issues/2805 | IC_kwDOAMm_X85J435l | lanougue 32069530 | 2022-09-07T16:53:34Z | 2022-09-07T17:00:44Z | NONE | Hi guys, for now, when I want to iterate over my whole dataset, I use this simple (but, I believe, dangerous) workaround:
Is there any news on this topic? Many thanks! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[Feature Request] iteration equivalent numpy's nditer or ndenumerate 419543087 | |
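The workaround itself is truncated in this export. The following is only a guess at the kind of pattern meant (hypothetical, and "dangerous" in that it relies on dimension ordering and eagerly indexes every point):

```python
import itertools

import numpy as np
import xarray as xr

ds = xr.Dataset({"v": (("x", "y"), np.arange(6).reshape(2, 3))})  # assumed data
for idx in itertools.product(*(range(s) for s in ds.sizes.values())):
    point = ds.isel(dict(zip(ds.sizes, idx)))  # one Dataset per grid point
```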
1085742545 | https://github.com/pydata/xarray/issues/1772#issuecomment-1085742545 | https://api.github.com/repos/pydata/xarray/issues/1772 | IC_kwDOAMm_X85Atx3R | lanougue 32069530 | 2022-04-01T10:42:20Z | 2022-04-01T10:42:20Z | NONE | Bumping this issue: any news? |
{ "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
nonzero method for xr.DataArray 280875330 | |
1026774266 | https://github.com/pydata/xarray/issues/6196#issuecomment-1026774266 | https://api.github.com/repos/pydata/xarray/issues/6196 | IC_kwDOAMm_X849M1T6 | lanougue 32069530 | 2022-02-01T12:07:51Z | 2022-02-01T12:07:51Z | NONE | Thanks for the explanation. Actually, this coordinate dependency on singleton dimensions caused me a problem when using the to_netcdf() function. There is no problem when working with the xr.Dataset in memory, but I get an error when writing to disk with to_netcdf(). So far I have not been able to reproduce it in a minimal example, because the error disappears in the minimal case; I could not find the fundamental difference between the dataset causing the error and the minimal one, as printing them gives exactly the same output. I need to do a deeper inspection. Concerning the philosophy of what a coordinate should be: the "label" idea is understandable at the dataset level. A singleton dimension becomes a (shared) "label" for the whole dataset, which is fine with me. However, I do not understand why it should also become a "label" of the other coordinates of the dataset. A singleton dimension should not be "more important" than the other (non-singleton) dimensions, so why should it become a "label" of another dimension while the other dimensions do not? That does not seem logical to me. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Wrong list of coordinate when a singleton coordinate exists 1115166039 | |
1022313564 | https://github.com/pydata/xarray/issues/6183#issuecomment-1022313564 | https://api.github.com/repos/pydata/xarray/issues/6183 | IC_kwDOAMm_X84870Rc | lanougue 32069530 | 2022-01-26T15:31:43Z | 2022-01-26T15:31:43Z | NONE | Ok, thanks! I will be patient, then. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[FEATURE]: dimension attribute are lost when stacking an xarray 1110623911 | |
861792425 | https://github.com/pydata/xarray/issues/5436#issuecomment-861792425 | https://api.github.com/repos/pydata/xarray/issues/5436 | MDEyOklzc3VlQ29tbWVudDg2MTc5MjQyNQ== | lanougue 32069530 | 2021-06-15T20:00:29Z | 2021-06-15T20:00:29Z | NONE | Would an additional flag like "keep_attrs" not be feasible? It would be a boolean. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
bug or unclear definition of combine_attrs with xr.merge() 911513701 | |
854812439 | https://github.com/pydata/xarray/issues/5436#issuecomment-854812439 | https://api.github.com/repos/pydata/xarray/issues/5436 | MDEyOklzc3VlQ29tbWVudDg1NDgxMjQzOQ== | lanougue 32069530 | 2021-06-04T15:24:25Z | 2021-06-04T15:24:25Z | NONE | I understand, but I still believe we should be able to separately control the attrs of the final dataset and the attrs of the merged DataArrays inside it (however they are passed to the merge function). Thanks for the pint-xarray suggestion! I didn't know about it; I will look into it. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
bug or unclear definition of combine_attrs with xr.merge() 911513701 | |
854768921 | https://github.com/pydata/xarray/issues/5436#issuecomment-854768921 | https://api.github.com/repos/pydata/xarray/issues/5436 | MDEyOklzc3VlQ29tbWVudDg1NDc2ODkyMQ== | lanougue 32069530 | 2021-06-04T14:27:07Z | 2021-06-04T14:27:07Z | NONE | Ok, I understand your point of view. My question (or what you think could be a bug) thus becomes: why does the "drop" option remove attrs from the variables in the merged dataset, while "drop_conflicts" and "override" keep them? There should be some way to tell the merge whether to keep the attrs of each variable in the final dataset. (I do not understand your comment: how would one keep the units on the data instead of in the attributes?) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
bug or unclear definition of combine_attrs with xr.merge() 911513701 | |
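For reference, a minimal sketch (assumed datasets, not taken from the thread) of how the combine_attrs options discussed above behave on dataset-level attrs:

```python
import xarray as xr

a = xr.Dataset(attrs={"units": "m", "source": "a"})
b = xr.Dataset(attrs={"units": "m", "source": "b"})
print(xr.merge([a, b], combine_attrs="drop").attrs)            # {}
print(xr.merge([a, b], combine_attrs="drop_conflicts").attrs)  # {'units': 'm'}
print(xr.merge([a, b], combine_attrs="override").attrs)        # attrs of a
```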
854739959 | https://github.com/pydata/xarray/issues/5436#issuecomment-854739959 | https://api.github.com/repos/pydata/xarray/issues/5436 | MDEyOklzc3VlQ29tbWVudDg1NDczOTk1OQ== | lanougue 32069530 | 2021-06-04T13:52:44Z | 2021-06-04T13:52:44Z | NONE | @keewis, do you think this behaviour is the expected one? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
bug or unclear definition of combine_attrs with xr.merge() 911513701 | |
611208139 | https://github.com/pydata/xarray/issues/3946#issuecomment-611208139 | https://api.github.com/repos/pydata/xarray/issues/3946 | MDEyOklzc3VlQ29tbWVudDYxMTIwODEzOQ== | lanougue 32069530 | 2020-04-08T21:37:45Z | 2020-04-08T21:37:45Z | NONE | @TomNicholas, thanks for your help. That is exactly what I wanted to do, but, as you said, there is probably a more efficient way to do it. @dcherian I needed this function because I sometimes use groupby_bins() followed by a concatenation along a new dimension, which can drastically increase memory use due to the duplication of the other variables in a Dataset. Independently of my usage, a function that removes redundant data seems interesting to me; there are probably other combinations of functions that can accidentally duplicate data. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
removing uneccessary dimension 595813283 | |
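A minimal sketch (assumed data) of the duplication pattern described above: after groupby_bins() plus concat() along a new dimension, variables that do not depend on the binned dimension are repeated in every group.

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"signal": ("t", np.random.rand(100)), "meta": ("s", np.arange(5))},
    coords={"t": np.linspace(0.0, 1.0, 100)},
)
groups = [g for _, g in ds.groupby_bins("t", bins=4)]
stacked = xr.concat(groups, dim="bin")
print(stacked["meta"].shape)  # (4, 5): "meta" is now duplicated four times
```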
610402170 | https://github.com/pydata/xarray/issues/3948#issuecomment-610402170 | https://api.github.com/repos/pydata/xarray/issues/3948 | MDEyOklzc3VlQ29tbWVudDYxMDQwMjE3MA== | lanougue 32069530 | 2020-04-07T13:57:04Z | 2020-04-07T13:57:04Z | NONE | Hi, if results1 is already evaluated, just replace "da1.release()" with "del da1"; Python should automatically release the memory. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Releasing memory? 595882590 | |
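A minimal sketch of that suggestion (assumed data; "da1" and "results1" are the names used in the issue):

```python
import gc

import numpy as np
import xarray as xr

da1 = xr.DataArray(np.zeros((1000, 1000)))
results1 = da1 * 2  # already evaluated (not lazy), as in the comment
del da1             # drop the last reference to the source array
gc.collect()        # usually unnecessary: CPython frees unreferenced arrays
```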
562654929 | https://github.com/pydata/xarray/issues/2605#issuecomment-562654929 | https://api.github.com/repos/pydata/xarray/issues/2605 | MDEyOklzc3VlQ29tbWVudDU2MjY1NDkyOQ== | lanougue 32069530 | 2019-12-06T17:02:05Z | 2019-12-06T17:02:05Z | NONE | Oh, sorry... I just saw the PR... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pad method 390774883 | |
562652648 | https://github.com/pydata/xarray/issues/2605#issuecomment-562652648 | https://api.github.com/repos/pydata/xarray/issues/2605 | MDEyOklzc3VlQ29tbWVudDU2MjY1MjY0OA== | lanougue 32069530 | 2019-12-06T16:56:20Z | 2019-12-06T16:56:20Z | NONE | Hi, I was looking for an xarray padding function and found this issue. For the moment, I have made a function of my own based on numpy.pad and xr.apply_ufunc; when possible, it also pads the associated coordinates. If it can be of any help, here it is:
```
def xpad(ds, dims={}):
    """
    Padding of an xarray object. Coordinates are linearly padded if the
    original coordinates are evenly spaced; otherwise, no new coordinates
    are assigned to the padded axis. The padded dimension is named with
    the prefix 'padded_'.
    """
``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pad method 390774883 | |
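The body of xpad is not shown in this export. Below is a minimal sketch of the approach the comment describes (numpy.pad wrapped in xr.apply_ufunc, with linear coordinate extension); the behavior is assumed from the docstring, not taken from the author's code.

```python
import numpy as np
import xarray as xr

def xpad_sketch(da, dims={}):
    # dims maps dimension name -> pad width (samples added on each side).
    for dim, w in dims.items():
        pad_last = lambda x, w=w: np.pad(x, [(0, 0)] * (x.ndim - 1) + [(w, w)])
        padded = xr.apply_ufunc(
            pad_last, da,
            input_core_dims=[[dim]],
            output_core_dims=[["padded_" + dim]],
        )
        if dim in da.coords and da[dim].size > 1:
            c = da[dim].values
            step = np.diff(c)
            if np.allclose(step, step[0]):  # evenly spaced: extend linearly
                new_c = c[0] + step[0] * np.arange(-w, c.size + w)
                padded = padded.assign_coords({"padded_" + dim: new_c})
        da = padded
    return da
```

For example, xpad_sketch(da, dims={"x": 2}) would pad "x" by two samples on each side and expose the result under the dimension name "padded_x".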
522592263 | https://github.com/pydata/xarray/issues/659#issuecomment-522592263 | https://api.github.com/repos/pydata/xarray/issues/659 | MDEyOklzc3VlQ29tbWVudDUyMjU5MjI2Mw== | lanougue 32069530 | 2019-08-19T14:09:36Z | 2019-08-19T14:09:36Z | NONE | { "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
groupby very slow compared to pandas 117039129 | ||
433393215 | https://github.com/pydata/xarray/issues/2494#issuecomment-433393215 | https://api.github.com/repos/pydata/xarray/issues/2494 | MDEyOklzc3VlQ29tbWVudDQzMzM5MzIxNQ== | lanougue 32069530 | 2018-10-26T12:37:30Z | 2018-10-26T12:37:30Z | NONE | Hi all,
I finally figured out my problem. On each independent process, xr.open_mfdataset() seems to attempt some multi-threaded access by default (even without the parallel option?). Each node of my cluster was configured in such a way that multi-threading was possible (my mistake). Here is the yaml config file I used with PBSCluster():
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Concurrent acces with multiple processes using open_mfdataset 371906566 | |
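The author's yaml file is truncated in this export. A hypothetical dask-jobqueue configuration of the kind the fix implies, expressed here in Python; all values are illustrative assumptions:

```python
from dask_jobqueue import PBSCluster

cluster = PBSCluster(
    cores=1,        # one thread per worker: avoids multi-threaded file access
    processes=1,    # one worker process per PBS job
    memory="4GB",
    walltime="01:00:00",
)
```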
431796693 | https://github.com/pydata/xarray/issues/2494#issuecomment-431796693 | https://api.github.com/repos/pydata/xarray/issues/2494 | MDEyOklzc3VlQ29tbWVudDQzMTc5NjY5Mw== | lanougue 32069530 | 2018-10-22T10:27:04Z | 2018-10-22T10:27:04Z | NONE | @jhamman I was aware of the difference between the two parallel options. I was thus wondering whether I could pass a parallel option to the netcdf4 library via the open_mfdataset() call. I tried changing the engine to netcdf4 and added the backend_kwargs:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Concurrent acces with multiple processes using open_mfdataset 371906566 |
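The exact backend_kwargs tried are truncated in this export; a hypothetical illustration of the kind of call described (file pattern and kwargs are placeholders, not the author's values):

```python
import xarray as xr

ds = xr.open_mfdataset(
    "data_*.nc",                   # illustrative file pattern
    engine="netcdf4",
    backend_kwargs={"mode": "r"},  # placeholder: actual kwargs are truncated
)
```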
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);