issue_comments: 326068119
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/1534#issuecomment-326068119 | https://api.github.com/repos/pydata/xarray/issues/1534 | 326068119 | MDEyOklzc3VlQ29tbWVudDMyNjA2ODExOQ== | 23199378 | 2017-08-30T17:50:11Z | 2017-08-30T17:50:11Z | NONE | Many thanks! I will try this. OK, since you asked for more details: I have used xarray resample successfully on a file with ~3 million single-ping ADCP ensembles, 17 variables with 1D and 2D data. Three lines of code and a handful of minutes to reduce that, on a middling laptop. Amazing. Unintended behaviors from resample that I need to figure out: On the menu next to learn/use: -- Multi-indexing and data reshaping -- slicing -- Separating an uneven time base into separate time series -- Calculations involving several of the variables at the same time (e.g. using xarray to perform the ADCP beam-to-earth rotations) Where is all this going? Ultimately, to produce mean current profile time series for data release (think hourly time-depth shaped dataframes) and bursts of hourly data (think hourly time-depth-sample shaped dataframes) on which to perform a variety of wave analyses. This is my learn-python project, so apologies for the non-pythonic approach. I also need to preserve backwards compatibility with existing code and conventions (EPIC, historically; CF and THREDDS, going forward). The project is here: https://github.com/mmartini-usgs/ADCPy | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 253407851 |
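The "three lines of code" pattern the comment alludes to can be sketched as below. This is a minimal illustration, not the ADCPy code: the dataset, variable names (`u`, `vel_profile`), and dimensions are hypothetical stand-ins for ADCP ensemble data, and the real workflow would open a file with `xr.open_dataset` rather than build one in memory.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical ADCP-like dataset: 12 hours of 1-minute "pings" with a
# 1-D velocity variable and a 2-D (time, depth) velocity profile.
times = pd.date_range("2017-01-01", periods=720, freq="min")
depths = np.arange(0.0, 10.0, 1.0)
ds = xr.Dataset(
    {
        "u": ("time", np.random.randn(times.size)),
        "vel_profile": (("time", "depth"), np.random.randn(times.size, depths.size)),
    },
    coords={"time": times, "depth": depths},
)

# Reduce the single-ping ensembles to hourly means in one call;
# resample applies to every variable that carries the 'time' dimension.
hourly = ds.resample(time="1h").mean()

print(hourly.sizes)  # 12 hourly bins along 'time', 10 cells along 'depth'
```

The same `resample` call handles both the 1-D and 2-D variables at once, which is what makes the reduction of millions of ensembles a few-line operation.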