issue_comments
5 rows where author_association = "NONE" and user = 6980561 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
604543399 | https://github.com/pydata/xarray/issues/1519#issuecomment-604543399 | https://api.github.com/repos/pydata/xarray/issues/1519 | MDEyOklzc3VlQ29tbWVudDYwNDU0MzM5OQ== | klapo 6980561 | 2020-03-26T16:50:44Z | 2020-03-26T16:50:44Z | NONE | I'm re-pinging this issue since I was just bitten by it (on version 0.15). If creating an exception is time-consuming, could we instead add a disclaimer regarding this behavior to the documentation? Here's a basic working example showing that it does silently fail:
```
# Basic working example of the silently failing indexing assignment
ds = xr.Dataset({'a': (('x'), np.arange(4)),
                 'b': (('x'), np.arange(4))})
# Silently fails assignment
ds.loc[{'x': 1}]['a'] = 10
print(ds)
# Works as intended
ds['a'].loc[{'x': 1}] = 10
print(ds)
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Assignment 252490115 | |
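The failure mode described in the comment above can be reproduced directly. This is a minimal sketch built from the comment's own snippet (current xarray versions still behave this way, since chained indexing assigns into a temporary selection object rather than the original Dataset):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({'a': ('x', np.arange(4)), 'b': ('x', np.arange(4))})

# Chained indexing: .loc[...] returns a new Dataset, so the assignment
# lands on a temporary object and ds is silently left unchanged
ds.loc[{'x': 1}]['a'] = 10
after_chained = int(ds['a'].sel(x=1))   # still 1

# Indexing .loc on the DataArray itself writes into ds in place
ds['a'].loc[{'x': 1}] = 10
after_direct = int(ds['a'].sel(x=1))    # now 10
```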
300545110 | https://github.com/pydata/xarray/issues/1391#issuecomment-300545110 | https://api.github.com/repos/pydata/xarray/issues/1391 | MDEyOklzc3VlQ29tbWVudDMwMDU0NTExMA== | klapo 6980561 | 2017-05-10T16:53:25Z | 2017-05-10T16:53:25Z | NONE | @darothen That sounds great! I think we should be clearer. The issue that @NicWayand and I are highlighting is coercing observational data, which often comes with some fairly heinous formatting issues, into an xarray format. Stacking these data along a new dimension is usually the last step in this process, and one that can be frustrating. An example of this in practice can be found in this notebook (please be forgiving, it is one of the first things I ever wrote in python): https://github.com/klapo/CalRad/blob/master/CR.SurfObs.DataIngest.xray.ipynb The data flow looks like this:
- read the csv summarizing each station
- read data from one set of stations using pandas
- clean the data
- assign the data in a pandas DataFrame to a dictionary of DataFrames
- rinse and repeat for the other set of data
- concat the dictionary of DataFrames into a single DataFrame
- convert to an xarray Dataset

This example is a little ludicrous because I didn't know what I was doing, but I think that's the point. There is a lot of ambiguity about which tools to use at what point. Concatenating a dictionary of DataFrames into a single DataFrame and then converting to a Dataset was the only solution I could get to work, after a lot of trial and error, for putting these data in an xarray Dataset. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Adding Example/Tutorial of importing data to Xarray (Merge/conact/etc) 225536793 | |
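The dictionary-of-DataFrames route outlined in the comment above can be sketched in a few lines. The station names and the `temp` variable here are hypothetical stand-ins for the cleaned per-station data:

```python
import pandas as pd

# Hypothetical cleaned per-station data, keyed by station name
frames = {
    'station_a': pd.DataFrame({'temp': [10.0, 11.0]},
                              index=pd.Index([0, 1], name='time')),
    'station_b': pd.DataFrame({'temp': [12.0, 13.0]},
                              index=pd.Index([0, 1], name='time')),
}

# Concatenate the dict into one DataFrame with a (station, time) MultiIndex,
# then convert to an xarray Dataset with 'station' and 'time' dimensions
combined = pd.concat(frames, names=['station'])
ds = combined.to_xarray()
```

`pandas.concat` on a dict uses the keys as an outer index level, which `to_xarray` then unstacks into a separate dimension.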
300383088 | https://github.com/pydata/xarray/issues/1391#issuecomment-300383088 | https://api.github.com/repos/pydata/xarray/issues/1391 | MDEyOklzc3VlQ29tbWVudDMwMDM4MzA4OA== | klapo 6980561 | 2017-05-10T06:03:20Z | 2017-05-10T06:03:20Z | NONE | Also, just a small thing in the docs for The example includes this snippet
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Adding Example/Tutorial of importing data to Xarray (Merge/conact/etc) 225536793 | |
300381278 | https://github.com/pydata/xarray/issues/1391#issuecomment-300381278 | https://api.github.com/repos/pydata/xarray/issues/1391 | MDEyOklzc3VlQ29tbWVudDMwMDM4MTI3OA== | klapo 6980561 | 2017-05-10T05:52:21Z | 2017-05-10T05:56:15Z | NONE | I have an example that I just struggled through that might be relevant to this idea. I'm running a point model with some arbitrary number of experiments (28 in the example below). Each experiment is opened and then stored in a dictionary:
```
resultsDataSet = xr.Dataset()
for k in scalar_data_vars:
    if 'scalar' not in k:
        continue
    # ...
print(resultsDataSet)
```
And here is a helper function that can do this more generally, which I wrote a while back:
```
def combinevars(ds_in, dat_vars, new_dim_name='new_dim', combinevarname='new_var'):
    ds_out = xr.concat([ds_in[dv] for dv in dat_vars], dim='new_dim')
    ds_out = ds_out.rename({'new_dim': new_dim_name})
    ds_out.coords[new_dim_name] = dat_vars
    ds_out.name = combinevarname
    return ds_out
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Adding Example/Tutorial of importing data to Xarray (Merge/conact/etc) 225536793 | |
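The `combinevars` helper from the comment above can be exercised as follows. This is a self-contained sketch; the two-experiment Dataset and the dimension/variable names are made up for illustration:

```python
import numpy as np
import xarray as xr

def combinevars(ds_in, dat_vars, new_dim_name='new_dim', combinevarname='new_var'):
    # Stack the listed variables along a fresh dimension, then rename it
    ds_out = xr.concat([ds_in[dv] for dv in dat_vars], dim='new_dim')
    ds_out = ds_out.rename({'new_dim': new_dim_name})
    ds_out.coords[new_dim_name] = dat_vars   # label the new dim with the var names
    ds_out.name = combinevarname
    return ds_out

# Hypothetical Dataset holding one variable per experiment
ds = xr.Dataset({'exp1': ('t', np.arange(3.0)),
                 'exp2': ('t', np.arange(3.0) + 10)})
combined = combinevars(ds, ['exp1', 'exp2'],
                       new_dim_name='experiment', combinevarname='scalar')
```

The result is a single DataArray with dimensions `('experiment', 't')`, which is the "stack a dict of results along a new dimension" pattern the comment describes.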
219183082 | https://github.com/pydata/xarray/pull/401#issuecomment-219183082 | https://api.github.com/repos/pydata/xarray/issues/401 | MDEyOklzc3VlQ29tbWVudDIxOTE4MzA4Mg== | klapo 6980561 | 2016-05-13T23:30:10Z | 2016-05-13T23:30:10Z | NONE | I also ran into this problem -- I wanted to save a netcdf with a boolean array. Casting the booleans as ints worked for my application. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Handle bool in NetCDF4 conversion 70805273 |
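The workaround mentioned in the last comment, casting booleans to ints before writing to netCDF, can be sketched like this (the `mask` variable is a made-up example; classic netCDF formats have no boolean type):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({'mask': ('x', np.array([True, False, True]))})

# Cast the boolean variable to a small integer dtype so it can be serialized
ds['mask'] = ds['mask'].astype('int8')
# ds.to_netcdf('out.nc') would now accept the variable
```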
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);