issue_comments: 601932932
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/3873#issuecomment-601932932 | https://api.github.com/repos/pydata/xarray/issues/3873 | 601932932 | MDEyOklzc3VlQ29tbWVudDYwMTkzMjkzMg== | 3274 | 2020-03-20T22:14:37Z | 2020-03-20T22:14:37Z | CONTRIBUTOR | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 585323675 |

body:

@keewis -- Yes, that is what I ended up with: making a multi-indexed pandas DataFrame first. But I still think it would be helpful to have this information somewhere in the xarray tutorial.

Also, one thing that can happen, and that could be addressed in the same part of the tutorial, is a non-unique multi-index. I did this with a set of experimental data, and the conversion to xarray failed because the multi-index was not unique. Note that this isn't necessarily pathological: it happened to me because some points in my data set were oversampled. So it would be very helpful for the tutorial to address this case: if you have multiple samples at the same point in the condition space, how do you add an arbitrary index level so that you can successfully translate a data frame with non-unique indexing into an xarray object? Is there some way to do this automatically? Even the diagnostic process is a bit of a nuisance:
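A minimal sketch of the diagnosis and workaround described above, assuming hypothetical condition variables `x` and `y`, a `signal` column, and an added `replicate` counter (none of these names come from the original data set):

```python
import pandas as pd

# Toy stand-in for the experimental data: two condition variables (x, y),
# with one condition point sampled twice ("oversampling"), so the
# MultiIndex is not unique.
index = pd.MultiIndex.from_tuples(
    [(0, 0), (0, 1), (1, 0), (1, 0), (1, 1)], names=["x", "y"]
)
df = pd.DataFrame({"signal": [0.10, 0.20, 0.30, 0.35, 0.40]}, index=index)

# Diagnostics: confirm the index is non-unique and find the repeated entries.
print(df.index.is_unique)                         # False
print(df.index[df.index.duplicated(keep=False)])  # the duplicated (x, y) pairs

# df.to_xarray() would raise here because of the duplicate index entries.
# Workaround: number the repeated samples within each (x, y) group and
# append that counter as an extra index level, making the index unique.
df_unique = df.copy()
df_unique["replicate"] = df_unique.groupby(level=["x", "y"]).cumcount()
df_unique = df_unique.set_index("replicate", append=True)

ds = df_unique.to_xarray()  # dims: x, y, replicate (NaN where a combination is absent)
print(ds)
```

The `cumcount` trick is just one way to generate the arbitrary extra level; anything that distinguishes the repeated samples (a timestamp, a run number) would serve the same purpose.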