issue_comments: 233995495
| field | value |
| --- | --- |
| html_url | https://github.com/pydata/xarray/issues/912#issuecomment-233995495 |
| issue_url | https://api.github.com/repos/pydata/xarray/issues/912 |
| id | 233995495 |
| node_id | MDEyOklzc3VlQ29tbWVudDIzMzk5NTQ5NQ== |
| user | 7504461 |
| created_at | 2016-07-20T16:00:02Z |
| updated_at | 2016-07-20T16:00:02Z |
| author_association | NONE |
| body | see below |
| reactions | `{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }` |
| performed_via_github_app | |
| issue | 166593563 |
body:

The input files are 2485 nested mat-files that come out of a measurement device. I read them in Python:

```python
from glob import glob

matfiles = glob('*sed.mat')
```

Afterwards, I populate the matrices in a loop:

```python
def f(i):
    ...  # function body truncated in this export
```

where

```python
def getABSpars(matfile):
    ...  # function body truncated in this export
```

Using the […]
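The two function bodies above were lost in the export, so the following is only a minimal sketch of what such a loading loop might look like. Everything in it is an assumption: the use of `scipy.io.loadmat`, the mat-file key names (`'conc'`, `'gsize'`), and the dimensions `n_duration` and `n_z`.

```python
# Hedged sketch only: the real f()/getABSpars() bodies are not shown above.
# Assumptions: scipy can read the mat-files, each file holds one burst, and
# the key names and profile dimensions below are hypothetical placeholders.
import numpy as np
from glob import glob
from scipy.io import loadmat

matfiles = sorted(glob('*sed.mat'))
n_duration, n_z = 300, 100          # hypothetical profile dimensions

# Preallocate one slot per burst (2485 files -> 2485 bursts).
ConcProf = np.empty((n_duration, n_z, len(matfiles)))
GsizeProf = np.empty_like(ConcProf)

def getABSpars(matfile):
    """Hypothetical reader: pull one burst's profiles from a nested mat-file."""
    m = loadmat(matfile)
    return m['conc'], m['gsize']

def f(i):
    # Fill burst i's concentration and grain-size profiles in place.
    conc, gsize = getABSpars(matfiles[i])
    ConcProf[:, :, i] = conc
    GsizeProf[:, :, i] = gsize

for i in range(len(matfiles)):
    f(i)
```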
Finally, I create the xarray dataset and save it to a nc-file:

```python
ds = xray.Dataset(
    {
        'conc_profs': (['duration', 'z', 'burst'], ConcProf),
        'grainSize_profs': (['duration', 'z', 'burst'], GsizeProf),
        'burst_duration': (['duration'], np.linspace(0, 299, Time.shape[0])),
    },
    coords={
        'time': (['duration', 'burst'], Time),
        'zdist': (['z'], Dist),
        'burst_nr': (['burst'], Burst),
    },
)
ds.to_netcdf('ABS_conc_size_12m.nc', mode='w')
```

It takes me around 1 h to generate the nc-file. Could this be the reason for my headaches? Thanks!
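As a quick sanity check (a hedged sketch, not part of the original comment's code), the written file could be read back with `open_dataset` to confirm that the variables and coordinates landed as intended:

```python
# Hedged sketch: verify the round trip of the file written above.
import xarray as xray  # the comment's code uses the old 'xray' name

ds_check = xray.open_dataset('ABS_conc_size_12m.nc')
print(ds_check)        # dimensions, coordinates, and data variables
ds_check.close()
```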
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
166593563 |