issue_comments
17 rows where user = 8453445 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1499591643 | https://github.com/pydata/xarray/issues/3216#issuecomment-1499591643 | https://api.github.com/repos/pydata/xarray/issues/3216 | IC_kwDOAMm_X85ZYfPb | chiaral 8453445 | 2023-04-06T20:34:19Z | 2023-04-06T20:34:47Z | CONTRIBUTOR | Hello!
Just adding a 👍 to this thread - and, since it is an old issue, wondering if this is on |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature request: time-based rolling window functionality 480753417 | |
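The request above is for pandas-style time-based rolling windows. For context, a minimal sketch of the difference using synthetic data (variable names and window sizes are illustrative only): xarray's rolling defines windows by a fixed number of samples, while pandas also accepts a time offset.
```
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2000-01-01", periods=12, freq="6H")
da = xr.DataArray(np.arange(12.0), coords={"time": times}, dims="time")

# xarray: window defined by a fixed number of samples
fixed = da.rolling(time=4, min_periods=1).mean()

# pandas: window defined by a time offset -- the behaviour the feature request asks for
offset = da.to_series().rolling("24H").mean()
```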
947906426 | https://github.com/pydata/xarray/issues/5877#issuecomment-947906426 | https://api.github.com/repos/pydata/xarray/issues/5877 | IC_kwDOAMm_X844f-d6 | chiaral 8453445 | 2021-10-20T17:59:13Z | 2021-10-20T17:59:13Z | CONTRIBUTOR | Yup - just followed your suggestion and: 1) conda removed and now the array([ nan, nan, 0. , 0. , 0. , 0. , 0. , 0.31 , 1.23 , 9.530001 , 10.64 , 9.75 , 2.67 , 1.35 , 1.46 , 0.36999997, 0.26999998, 0.25 , 0.14999999, 2.68 , 2.56 , 2.73 , 0.39999998, 0.39999998, 0.19999999, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ], dtype=float32) could you elaborate more on the issue? is this because of some bouncing between precisions across packages? But why do I have zeros at the beginning of the rolling sum and non zeros after having calculated a sum? it is not consistent in the behaviour. Thanks tho! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Rolling() gives values different from pd.rolling() 1030768250 | |
947195221 | https://github.com/pydata/xarray/issues/5877#issuecomment-947195221 | https://api.github.com/repos/pydata/xarray/issues/5877 | IC_kwDOAMm_X844dQ1V | chiaral 8453445 | 2021-10-20T00:02:58Z | 2021-10-20T00:02:58Z | CONTRIBUTOR | Adding a few extra observations:
But when I switch to other operations, like whereas
array([[ nan, nan, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.78978585e-01, 4.68081166e-01, 4.44740760e+00, 4.12409195e+00, 4.42830679e+00, 7.51465227e-01, 6.67757461e-01, 6.35400157e-01, 2.08166670e-02, 7.81024957e-02, 7.23417792e-02, 6.24499786e-02, 1.41810905e+00, 1.45211339e+00, 1.40652052e+00, 1.15470047e-01, 1.15470047e-01, 1.15470047e-01, 9.60572442e-08, 9.60572442e-08, 9.60572442e-08, 9.60572442e-08, 9.60572442e-08, 9.60572442e-08, 9.60572442e-08, 9.60572442e-08, 9.60572442e-08, 9.60572442e-08]]) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Rolling() gives values different from pd.rolling() 1030768250 | |
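The two comments above are about xarray and pandas rolling sums disagreeing at the level of float32 precision. A minimal sketch of the kind of comparison being discussed; the numbers below are illustrative, not the original data:
```
import numpy as np
import pandas as pd
import xarray as xr

values = np.array([0.0, 0.31, 1.23, 9.53, 10.64, 9.75, 2.67], dtype="float32")
da = xr.DataArray(values, dims="time")

xr_sum = da.rolling(time=3).sum()
pd_sum = pd.Series(values).rolling(3).sum()

# any mismatch here is at the level of float32 round-off / order of operations,
# not a genuine disagreement between the two libraries
print(np.abs(xr_sum.values[2:] - pd_sum.values[2:]).max())
```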
758204109 | https://github.com/pydata/xarray/issues/4793#issuecomment-758204109 | https://api.github.com/repos/pydata/xarray/issues/4793 | MDEyOklzc3VlQ29tbWVudDc1ODIwNDEwOQ== | chiaral 8453445 | 2021-01-11T20:29:15Z | 2021-01-11T20:29:15Z | CONTRIBUTOR | Great - I will plan on modifying it using the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
More advanced tutorial on how to manipulate facetgrid 783630055 | |
446373483 | https://github.com/pydata/xarray/issues/2380#issuecomment-446373483 | https://api.github.com/repos/pydata/xarray/issues/2380 | MDEyOklzc3VlQ29tbWVudDQ0NjM3MzQ4Mw== | chiaral 8453445 | 2018-12-11T21:43:53Z | 2018-12-11T21:43:53Z | CONTRIBUTOR | I have found a workaround, I think, in the last item of this issue. You have to set it before running it in xarray. https://github.com/NCAR/pynio/issues/19 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Cannot specify options for pynio engine through backend_kwargs of open_dataset/open_dataarray 353566871 | |
418420696 | https://github.com/pydata/xarray/issues/1844#issuecomment-418420696 | https://api.github.com/repos/pydata/xarray/issues/1844 | MDEyOklzc3VlQ29tbWVudDQxODQyMDY5Ng== | chiaral 8453445 | 2018-09-04T15:53:10Z | 2018-09-04T15:53:10Z | CONTRIBUTOR | Thanks - i will give this a try! And thanks for the clarifications. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
How to broadcast along dayofyear 290023410 | |
418175182 | https://github.com/pydata/xarray/issues/1844#issuecomment-418175182 | https://api.github.com/repos/pydata/xarray/issues/1844 | MDEyOklzc3VlQ29tbWVudDQxODE3NTE4Mg== | chiaral 8453445 | 2018-09-03T18:38:47Z | 2018-09-03T18:38:47Z | CONTRIBUTOR | Yes, @spencerkclark, that was my initial intent. I thought - for some reason, and I understand I was wrong about it - that dayofyear would always align the days on the same grid. To be honest, I had never used it until now, so I wasn't sure how it worked. I was just surprised by that behavior, which I understand is intended; it is just not explained well, IMHO. If we calculate the daily climatology, the 366th day is the 31st of December every 4 years, right? It just wasn't exactly what I expected, so I thought to put a note in this issue, which popped up when I was looking for more details about this attribute. That said - is there a more suitable attribute for what I want to do? This may not be the best place to discuss it; I can send an email to the mailing list. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
How to broadcast along dayofyear 290023410 | |
417437968 | https://github.com/pydata/xarray/issues/1844#issuecomment-417437968 | https://api.github.com/repos/pydata/xarray/issues/1844 | MDEyOklzc3VlQ29tbWVudDQxNzQzNzk2OA== | chiaral 8453445 | 2018-08-30T19:24:46Z | 2018-08-30T19:24:46Z | CONTRIBUTOR | I am commenting on this issue, because my findings seem relevant to this example. I have just encountered an unexpected (to me) behavior of dayofyear. I have a dataset, ds:
S is my time coordinate. It is daily, but not continuous
For example for 1999 first three months: ``` ds.S.sel(S=slice('1999-01-01','1999-03-05')) <xarray.DataArray 'S' (S: 13)> array(['1999-01-01T12:00:00.000000000', '1999-01-06T12:00:00.000000000', '1999-01-11T12:00:00.000000000', '1999-01-16T12:00:00.000000000', '1999-01-21T12:00:00.000000000', '1999-01-26T12:00:00.000000000', '1999-01-31T12:00:00.000000000', '1999-02-05T12:00:00.000000000', '1999-02-10T12:00:00.000000000', '1999-02-15T12:00:00.000000000', '1999-02-20T12:00:00.000000000', '1999-02-25T12:00:00.000000000', '1999-03-02T12:00:00.000000000'], dtype='datetime64[ns]') Coordinates: * S (S) datetime64[ns] 1999-01-01T12:00:00 1999-01-06T12:00:00 ... ``` and for 2008: ``` broadcasted_data.S.sel(S=slice('2008-01-01','2008-03-05')) <xarray.DataArray 'S' (S: 13)> array(['2008-01-01T12:00:00.000000000', '2008-01-06T12:00:00.000000000', '2008-01-11T12:00:00.000000000', '2008-01-16T12:00:00.000000000', '2008-01-21T12:00:00.000000000', '2008-01-26T12:00:00.000000000', '2008-01-31T12:00:00.000000000', '2008-02-05T12:00:00.000000000', '2008-02-10T12:00:00.000000000', '2008-02-15T12:00:00.000000000', '2008-02-20T12:00:00.000000000', '2008-02-25T12:00:00.000000000', '2008-03-02T12:00:00.000000000'], dtype='datetime64[ns]') Coordinates: * S (S) datetime64[ns] 2008-01-01T12:00:00 2008-01-06T12:00:00 ... ``` Please note, within the non leap (1999) or leap (2008) years, the days are the same. There are 73 S values per year. However when I groupby('S.dayofyear') things are not aligned anymore starting from March. For example, if I groupby() and print the value of dayofyear and the grouped values: ``` for k, gg in ds.groupby('S.dayofyear'): print(k) print(gg) ..... 51 ## 51st day of the year <xarray.Dataset> Dimensions: (L: 45, S: 16) Coordinates: * S (S) datetime64[ns] 1999-02-20T12:00:00 2000-02-20T12:00:00 ... * L (L) float64 0.0 24.0 48.0 72.0 96.0 120.0 144.0 168.0 192.0 ... Data variables: pr (S, L) float32 2.8822698e-05 3.1478736e-05 3.707411e-05 ... truth (S, L) float32 2.8387214e-05 2.8993465e-05 2.8109233e-05 ... 56 ## 56st day of the year <xarray.Dataset> Dimensions: (L: 45, S: 16) Coordinates: * S (S) datetime64[ns] 1999-02-25T12:00:00 2000-02-25T12:00:00 ... * L (L) float64 0.0 24.0 48.0 72.0 96.0 120.0 144.0 168.0 192.0 ... Data variables: pr (S, L) float32 3.5827405e-05 2.27847e-05 2.8826753e-05 ... truth (S, L) float32 2.9589286e-05 2.6589936e-05 2.7626802e-05 ... ``` up to here everything looks good, I have 16 values (one for each year of data) for each day of the year, but starting with March 2nd, they start getting split in two groups: ``` 61 ## 61st day of the year <xarray.Dataset> Dimensions: (L: 45, S: 12) Coordinates: * S (S) datetime64[ns] 1999-03-02T12:00:00 2001-03-02T12:00:00 ... * L (L) float64 0.0 24.0 48.0 72.0 96.0 120.0 144.0 168.0 192.0 ... Data variables: pr (S, L) float32 2.2245076e-05 2.9928206e-05 3.2708682e-05 ... truth (S, L) float32 2.5899697e-05 2.5815236e-05 2.6628013e-05 ... 62## 62nd day of the year <xarray.Dataset> Dimensions: (L: 45, S: 4) Coordinates: * S (S) datetime64[ns] 2000-03-02T12:00:00 2004-03-02T12:00:00 ... * L (L) float64 0.0 24.0 48.0 72.0 96.0 120.0 144.0 168.0 192.0 ... Data variables: pr (S, L) float32 2.3905726e-05 2.1646814e-05 1.5209519e-05 ... truth (S, L) float32 2.4452387e-05 2.5048954e-05 2.5876538e-05 ... 66## 66th day of the year <xarray.Dataset> Dimensions: (L: 45, S: 12) Coordinates: * S (S) datetime64[ns] 1999-03-07T12:00:00 2001-03-07T12:00:00 ... * L (L) float64 0.0 24.0 48.0 72.0 96.0 120.0 144.0 168.0 192.0 ... 
Data variables: pr (S, L) float32 2.60827e-05 4.9364742e-05 3.838778e-05 ... truth (S, L) float32 2.6537613e-05 2.7840171e-05 2.7700215e-05 ... 67## 67th day of the year <xarray.Dataset> Dimensions: (L: 45, S: 4) Coordinates: * S (S) datetime64[ns] 2000-03-07T12:00:00 2004-03-07T12:00:00 ... * L (L) float64 0.0 24.0 48.0 72.0 96.0 120.0 144.0 168.0 192.0 ... Data variables: pr (S, L) float32 1.59269e-05 2.7056101e-05 1.8332774e-05 ... truth (S, L) float32 2.1952277e-05 2.7667278e-05 2.5342364e-05 ... ``` and so on. This was unexpected to me. And not well document. It means that, especially when we calculate anomalies, we might not be aligning things correctly? or am I wrong? Is there a way to group the data by the day of the year so that everything is grouped on 366 days? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
How to broadcast along dayofyear 290023410 | |
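The split groups described above follow from dayofyear being a plain calendar-day count: after February 28 the same calendar date maps to different day-of-year values in leap and non-leap years, so groupby('S.dayofyear') separates them. A minimal sketch (dates chosen to mirror the example; the month-day workaround at the end is one possible approach, not the only one):
```
import numpy as np
import pandas as pd
import xarray as xr

times = pd.to_datetime(["1999-03-02", "2008-03-02"])  # non-leap vs leap year
da = xr.DataArray(np.ones(2), coords={"S": times}, dims="S")

print(da.S.dt.dayofyear.values)  # [61 62]: same calendar date, different day of year
for doy, group in da.groupby("S.dayofyear"):
    print(doy, group.S.values)   # March 2nd lands in two separate groups

# one possible workaround: group on a month-day label instead of dayofyear
for md, group in da.groupby(da.S.dt.strftime("%m-%d")):
    print(md, group.S.values)
```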
397016711 | https://github.com/pydata/xarray/issues/2232#issuecomment-397016711 | https://api.github.com/repos/pydata/xarray/issues/2232 | MDEyOklzc3VlQ29tbWVudDM5NzAxNjcxMQ== | chiaral 8453445 | 2018-06-13T17:16:45Z | 2018-06-13T17:16:45Z | CONTRIBUTOR | I think the correct way should be to use 'QS-DEC' and not 'Q-NOV' - and this was definitely true. However, I ran this on 0.10.2 - not sure if it was fixed in the latest versions. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
resample monthly to seasonal docstring example is wrong 332077435 | |
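For reference, the anchored quarterly frequency discussed above, as a minimal sketch with synthetic monthly data: 'QS-DEC' labels each DJF/MAM/JJA/SON season by its start month, whereas 'Q-NOV' labels the same groupings by their end.
```
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2000-01-01", periods=24, freq="MS")
da = xr.DataArray(np.arange(24.0), coords={"time": times}, dims="time")

# quarters anchored to start in December, i.e. DJF, MAM, JJA, SON means
seasonal = da.resample(time="QS-DEC").mean()
print(seasonal.time.values[:2])  # season labels fall on Dec 1, Mar 1, ...
```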
390670582 | https://github.com/pydata/xarray/issues/2159#issuecomment-390670582 | https://api.github.com/repos/pydata/xarray/issues/2159 | MDEyOklzc3VlQ29tbWVudDM5MDY3MDU4Mg== | chiaral 8453445 | 2018-05-21T14:28:08Z | 2018-05-21T14:28:08Z | CONTRIBUTOR | Thanks for opening up this issue. This would be very helpful for the forecasting community as well, where we usually concatenate along Start time and Lead time dimensions. It was mentioned here, however, that it is quite difficult to generalize, and a workaround was suggested. I know that some people did it for specific datasets, so maybe it would be helpful to add an example to the documentation that shows how this can be implemented on a case-by-case basis? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Concatenate across multiple dimensions with open_mfdataset 324350248 | |
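A sketch of the case-by-case pattern alluded to above for forecast archives: open the files for one start date, concatenate them along the lead-time dimension, then concatenate the per-start results along the start-time dimension. The file naming scheme and dimension names (S, L) here are hypothetical.
```
import xarray as xr

# hypothetical forecast files organised by start date (S) and lead time (L)
start_dates = ["2000-01-01", "2000-01-06"]
lead_hours = ["024", "048", "072"]

per_start = []
for s in start_dates:
    per_lead = [xr.open_dataset(f"forecast_S{s}_L{h}.nc") for h in lead_hours]
    per_start.append(xr.concat(per_lead, dim="L"))

combined = xr.concat(per_start, dim="S")
```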
385016657 | https://github.com/pydata/xarray/issues/2055#issuecomment-385016657 | https://api.github.com/repos/pydata/xarray/issues/2055 | MDEyOklzc3VlQ29tbWVudDM4NTAxNjY1Nw== | chiaral 8453445 | 2018-04-27T16:04:12Z | 2018-04-27T16:04:12Z | CONTRIBUTOR | Great, will do. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Documentation on assign a value and vectorized indexing 314239017 | |
385016272 | https://github.com/pydata/xarray/issues/2055#issuecomment-385016272 | https://api.github.com/repos/pydata/xarray/issues/2055 | MDEyOklzc3VlQ29tbWVudDM4NTAxNjI3Mg== | chiaral 8453445 | 2018-04-27T16:02:44Z | 2018-04-27T16:02:44Z | CONTRIBUTOR |
I think this is not correct. The where you linked (or at least the way it is used) is for masking. My example uses xarray.where() to assign values. But again, I might be off; I have a limited understanding of this. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Documentation on assign a value and vectorized indexing 314239017 | |
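The distinction drawn in the comment above, as a short sketch: the DataArray.where method masks (keeps values where the condition holds, NaN elsewhere), while the three-argument xr.where picks element-wise between two sources, which is what makes it usable for assignment-style updates.
```
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6.0).reshape(2, 3), dims=("y", "x"))
cond = da > 2

masked = da.where(cond)            # masking: values where cond is False become NaN
updated = xr.where(cond, 100, da)  # selection: 100 where cond is True, da elsewhere
```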
385000519 | https://github.com/pydata/xarray/issues/2055#issuecomment-385000519 | https://api.github.com/repos/pydata/xarray/issues/2055 | MDEyOklzc3VlQ29tbWVudDM4NTAwMDUxOQ== | chiaral 8453445 | 2018-04-27T15:12:39Z | 2018-04-27T15:12:39Z | CONTRIBUTOR | For example, using the tutorial data:
```
ds = xr.tutorial.load_dataset('air_temperature')

# add an empty 2D dataarray
ds['empty'] = xr.full_like(ds.air.mean('time'), fill_value=0)

# modify one grid point, using where() or loc()
ds['empty'] = xr.where((ds.coords['lat']==20)&(ds.coords['lon']==260), 100, ds['empty'])
ds['empty'].loc[dict(lon=260, lat=30)] = 100

# modify an area with where() and a mask
mask = (ds.coords['lat']>20)&(ds.coords['lat']<60)&(ds.coords['lon']>220)&(ds.coords['lon']<260)
ds['empty'] = xr.where(mask, 100, ds['empty'])

# modify an area with loc()
lc = ds.coords['lon']
la = ds.coords['lat']
ds['empty'].loc[dict(lon=lc[(lc>220)&(lc<260)], lat=la[(la>20)&(la<60)])] = 100
```
These are examples that I am pretty sure are not on the website; I think they are common in climate scientists' workflows, and it took me quite a while to figure them out. I was using a boolean dataarray, as in the SO example, which slowed down my work quite a bit. Do they make sense? I can try to add them to the documentation at Assigning Values with indexing, or is there another place that is more relevant? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Documentation on assign a value and vectorized indexing 314239017 | |
384989608 | https://github.com/pydata/xarray/issues/2055#issuecomment-384989608 | https://api.github.com/repos/pydata/xarray/issues/2055 | MDEyOklzc3VlQ29tbWVudDM4NDk4OTYwOA== | chiaral 8453445 | 2018-04-27T14:36:45Z | 2018-04-27T14:36:45Z | CONTRIBUTOR | I finally had the time to try out this SO suggestion on assigning on multiple dimensions as well (imagine needing to modify the forcing of a model for a selected area) and it works. These are quite peculiar ways (at least for people not deep into xarray...) to assign values; I am compiling a list of them which IMHO should be added somewhere in the help. I will post them here for discussion, and to make sure they are indeed the most correct way to do it! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Documentation on assign a value and vectorized indexing 314239017 | |
381254511 | https://github.com/pydata/xarray/issues/2055#issuecomment-381254511 | https://api.github.com/repos/pydata/xarray/issues/2055 | MDEyOklzc3VlQ29tbWVudDM4MTI1NDUxMQ== | chiaral 8453445 | 2018-04-13T20:38:09Z | 2018-04-13T20:38:09Z | CONTRIBUTOR | Regarding B) I think that the current text can lead to confusion:
because selecting and assigning are discussed together. I think that should be fixed too. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Documentation on assign a value and vectorized indexing 314239017 | |
377249450 | https://github.com/pydata/xarray/issues/2023#issuecomment-377249450 | https://api.github.com/repos/pydata/xarray/issues/2023 | MDEyOklzc3VlQ29tbWVudDM3NzI0OTQ1MA== | chiaral 8453445 | 2018-03-29T14:16:39Z | 2018-03-29T14:16:39Z | CONTRIBUTOR | (short introduction: I created this issue, but I didn't realize I was logged into another account)
I am not sure I have a constructive comment on how to name it. How about just "q", since that is the name of the parameter? Too short? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
quantile method returns quantile coordinates which can raise issues 309378665 | |
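For reference on the issue above: DataArray.quantile attaches a quantile coordinate to its result, which can get in the way when combining outputs; dropping (or renaming) it afterwards is one way around that. A minimal sketch:
```
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(100), dims="sample")

q50 = da.quantile(0.5)
print(q50.coords)  # includes a scalar 'quantile' coordinate

q50_clean = q50.drop_vars("quantile")  # drop it (or rename it) before merging results
```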
254904584 | https://github.com/pydata/xarray/issues/1051#issuecomment-254904584 | https://api.github.com/repos/pydata/xarray/issues/1051 | MDEyOklzc3VlQ29tbWVudDI1NDkwNDU4NA== | chiaral 8453445 | 2016-10-19T18:44:41Z | 2016-10-19T18:44:41Z | CONTRIBUTOR | I will be happy to add this information to the documentation. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
to_netcdf Documentation 183715595 |
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
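The schema above can be queried directly with Python's sqlite3 module; a minimal sketch that reproduces this page's filter (the database filename is an assumption):
```
import sqlite3

conn = sqlite3.connect("github.db")  # filename is an assumption
rows = conn.execute(
    """
    SELECT id, issue_url, created_at, updated_at, body
    FROM issue_comments
    WHERE [user] = 8453445
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows))  # 17 rows for this user
```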