issue_comments
5 rows where issue = 166593563 (Speed up operations with xarray dataset) and user = 7504461 (saulomeirelles), sorted by updated_at descending
All five rows share the same issue and user fields: user saulomeirelles (7504461), author_association NONE, issue Speed up operations with xarray dataset (166593563), issue_url https://api.github.com/repos/pydata/xarray/issues/912. Every row has all-zero reactions ("total_count": 0) and an empty performed_via_github_app.
id 234043292 · node_id MDEyOklzc3VlQ29tbWVudDIzNDA0MzI5Mg==
html_url https://github.com/pydata/xarray/issues/912#issuecomment-234043292
created_at 2016-07-20T18:44:53Z · updated_at 2016-07-20T18:44:53Z

No, not really. I got no error message whatsoever. Is there any test I can run to track this down?

Sent from Smartphone. Please forgive typos.

On Jul 20, 2016 8:41 PM, "Stephan Hoyer" notifications@github.com wrote: […]
id 234035910 · node_id MDEyOklzc3VlQ29tbWVudDIzNDAzNTkxMA==
html_url https://github.com/pydata/xarray/issues/912#issuecomment-234035910
created_at 2016-07-20T18:20:24Z · updated_at 2016-07-20T18:20:24Z

True. I decided to wait for […]
id 234022793 · node_id MDEyOklzc3VlQ29tbWVudDIzNDAyMjc5Mw==
html_url https://github.com/pydata/xarray/issues/912#issuecomment-234022793
created_at 2016-07-20T17:36:02Z · updated_at 2016-07-20T17:36:17Z

Thanks, @shoyer! Setting smaller chunks helps; however, my issue is the way back. This is fine:

[…]

But this:

[…]

takes an insane amount of time, which intrigues me because it is just a vector with 2845 points. Is there another way to tackle this without […]? If […]
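The two snippets are elided from this export and cannot be recovered, but the fast/slow contrast the comment describes maps onto xarray's lazy-versus-eager divide. A minimal sketch, not the poster's actual code, assuming the file and variable names that appear in the last comment of this thread (the old `xray` alias is replaced with the modern `xarray` import):

```
import xarray as xr

# Open the netCDF lazily with dask chunks; the "burst" dimension and the
# chunk size are assumptions, not the poster's actual settings.
ds = xr.open_dataset('ABS_conc_size_12m.nc', chunks={'burst': 100})

# Cheap: building a lazy selection returns almost instantly.
profile = ds['conc_profs'].isel(burst=0)

# Expensive step the comment complains about: materializing even a small
# 1-D variable forces dask to walk the chunked file.
duration = ds['burst_duration'].load()
```

How slow the `.load()` step is depends on how the chosen chunking interacts with the on-disk layout; very small chunks multiply the per-read overhead even for a short vector.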
id 233998071 · node_id MDEyOklzc3VlQ29tbWVudDIzMzk5ODA3MQ==
html_url https://github.com/pydata/xarray/issues/912#issuecomment-233998071
created_at 2016-07-20T16:08:57Z · updated_at 2016-07-20T16:08:57Z

I've tried to create individual nc-files and then read them all using […]. The […]

Cheers,
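The function name is elided above; the standard xarray tool for reading a directory of per-burst netCDF files back as a single dataset is `open_mfdataset`. A hedged sketch with a hypothetical file pattern (this is the modern signature; the 2016-era call took only `concat_dim`):

```
import xarray as xr

# 'burst_*.nc' is a placeholder pattern for the individual nc-files the
# comment mentions; files are concatenated along a new "burst" dimension.
ds = xr.open_mfdataset('burst_*.nc', combine='nested', concat_dim='burst')
```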
id 233995495 · node_id MDEyOklzc3VlQ29tbWVudDIzMzk5NTQ5NQ==
html_url https://github.com/pydata/xarray/issues/912#issuecomment-233995495
created_at 2016-07-20T16:00:02Z · updated_at 2016-07-20T16:00:02Z

The input files are 2485 nested mat-files that come out of a measurement device. I read them in Python ([…]):

```
matfiles = glob('*sed.mat')
```

Afterwards, I populate the matrices in a loop:

```
def f(i):
    […]
```

where

```
def getABSpars(matfile):
    […]
```

Using the […]

Finally, I create the xarray dataset and then save it into a nc-file:

```
ds = xray.Dataset(
    {
        'conc_profs':      (['duration', 'z', 'burst'], ConcProf),
        'grainSize_profs': (['duration', 'z', 'burst'], GsizeProf),
        'burst_duration':  (['duration'], np.linspace(0, 299, Time.shape[0])),
    },
    coords={
        'time':     (['duration', 'burst'], Time),
        'zdist':    (['z'], Dist),
        'burst_nr': (['burst'], Burst),
    },
)
ds.to_netcdf('ABS_conc_size_12m.nc', mode='w')
```

It costs me around 1 h to generate the nc-file. Could this be the reason for my headaches? Thanks!
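The bodies of `f` and `getABSpars` are lost in this export. A minimal reconstruction of the loop pattern the comment describes, using `scipy.io.loadmat`; the field names inside the mat-files, the array shapes, and the preallocation strategy are all assumptions:

```
from glob import glob

import numpy as np
from scipy.io import loadmat  # reads non-v7.3 MATLAB files

matfiles = sorted(glob('*sed.mat'))

def getABSpars(matfile):
    # Hypothetical field names; the real nested mat-file layout is unknown.
    mat = loadmat(matfile)
    return mat['conc'], mat['gsize'], mat['time']

# One burst per file: preallocate with the burst axis last, matching the
# ('duration', 'z', 'burst') dimensions used in the Dataset above.
n = len(matfiles)
conc0, gsize0, time0 = getABSpars(matfiles[0])
ConcProf = np.empty(conc0.shape + (n,))
GsizeProf = np.empty(gsize0.shape + (n,))
Time = np.empty((time0.size, n))

def f(i):
    conc, gsize, t = getABSpars(matfiles[i])
    ConcProf[..., i] = conc
    GsizeProf[..., i] = gsize
    Time[:, i] = t.ravel()

for i in range(n):
    f(i)
```

In this shape, each call of `f` touches exactly one file, so the loop is embarrassingly parallel across the 2485 inputs, which is consistent with the comment wrapping the per-file work in a function of `i`.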
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
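The page above is the result of a filtered query over that table. A sketch of the equivalent query through Python's sqlite3 module; the database filename is a placeholder, since the export does not name it:

```
import sqlite3

conn = sqlite3.connect('github.db')  # placeholder filename
rows = conn.execute(
    "SELECT id, created_at, updated_at, body FROM issue_comments "
    "WHERE issue = 166593563 AND [user] = 7504461 "
    "ORDER BY updated_at DESC"
).fetchall()
print(len(rows))  # expect 5, per the row count above
```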