issue_comments
12 rows where issue = 166593563 sorted by updated_at descending
Issue: Speed up operations with xarray dataset (12 comments)
269566487 · jhamman (MEMBER) · 2016-12-29T01:07:52Z
https://github.com/pydata/xarray/issues/912#issuecomment-269566487

@saulomeirelles - Hopefully, you were able to work through this issue. If not, feel free to reopen.
234056046 · shoyer (MEMBER) · 2016-07-20T19:29:55Z
https://github.com/pydata/xarray/issues/912#issuecomment-234056046

Just looking at a task manager while a task executes can give you a sense of what's going on. Dask also has some diagnostics that may be helpful: http://dask.pydata.org/en/latest/diagnostics.html
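The diagnostics linked above can be attached as context managers. A minimal sketch, assuming dask is installed (the array and its shape here are invented stand-ins, not the poster's data):

```python
import dask.array as da
from dask.diagnostics import ProgressBar

# a synthetic lazy computation standing in for the real dataset
x = da.random.random((4000, 1000), chunks=(1000, 1000))

with ProgressBar():        # prints a live progress bar while tasks execute
    total = x.sum().compute()
```

The same `with` block works around any `.compute()` or `.load()` call, which makes it easy to see whether a slow step is actually scheduling work or stuck on I/O.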
234043292 · saulomeirelles (NONE) · 2016-07-20T18:44:53Z
https://github.com/pydata/xarray/issues/912#issuecomment-234043292

No, not really. I got no error message whatsoever. Is there any test I can do to tackle this?
234042142 · shoyer (MEMBER) · 2016-07-20T18:41:17Z
https://github.com/pydata/xarray/issues/912#issuecomment-234042142

Are you running out of memory? Can you tell what's going on? This is a little surprising to me.
234035910 · saulomeirelles (NONE) · 2016-07-20T18:20:24Z
https://github.com/pydata/xarray/issues/912#issuecomment-234035910

True. I decided to wait for […]
234026185 · shoyer (MEMBER) · 2016-07-20T17:47:45Z
https://github.com/pydata/xarray/issues/912#issuecomment-234026185

It's worth noting that […]
234022793 · saulomeirelles (NONE) · 2016-07-20T17:36:02Z
https://github.com/pydata/xarray/issues/912#issuecomment-234022793

Thanks, @shoyer! Setting smaller chunks helps; however, my issue is the way back. This is fine:

[…]

But this:

[…]

takes an insane amount of time, which intrigues me because it is just a vector with 2845 points. Is there another way to tackle this without […] If […]
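The slow expression in this comment was lost in export, but a common cause of this symptom with dask-backed data is that every access re-runs the whole task graph. A hedged sketch (variable names and chunk size invented) of computing once with `.load()` so later accesses are plain in-memory numpy:

```python
import numpy as np
import xarray as xr

# toy stand-in for the 2845-point vector discussed above
vec = xr.DataArray(np.arange(2845.0), dims='burst').chunk({'burst': 500})

vec_in_mem = vec.load()        # triggers the dask computation exactly once
first = float(vec_in_mem[0])   # subsequent indexing costs essentially nothing
```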
233998757 · shoyer (MEMBER) · 2016-07-20T16:11:27Z
https://github.com/pydata/xarray/issues/912#issuecomment-233998757

When you write […] You will probably be more successful if you try something like […]
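The code in this comment did not survive export. A hypothetical pattern consistent with the surrounding advice about chunking (file name, variable name, and chunk size are all invented here) is opening the file lazily and loading only the slice you need:

```python
import numpy as np
import xarray as xr

# write a tiny toy file standing in for the real dataset
xr.Dataset({'conc': (['burst'], np.arange(10.0))}).to_netcdf('toy.nc')

ds = xr.open_dataset('toy.nc', chunks={'burst': 5})  # lazy, dask-backed
val = float(ds['conc'].isel(burst=3).load())         # computes just one element
```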
233998071 · saulomeirelles (NONE) · 2016-07-20T16:08:57Z
https://github.com/pydata/xarray/issues/912#issuecomment-233998071

I've tried to create individual nc-files and then read them all using […] The […]

Cheers,
233996527 · shoyer (MEMBER) · 2016-07-20T16:03:30Z
https://github.com/pydata/xarray/issues/912#issuecomment-233996527

Thanks for describing that -- I misread your initial description and thought you were using […]
233995495 · saulomeirelles (NONE) · 2016-07-20T16:00:02Z
https://github.com/pydata/xarray/issues/912#issuecomment-233995495

The input files are 2485 nested mat-files that come out from a measurement device. I read them in Python ([…]):

```
matfiles = glob('*sed.mat')
```

Afterwards, I populate the matrices in a loop:

```
def f(i):
```

where

```
def getABSpars(matfile):
```

Using the […]

Finally, I create the xarray dataset and then save it into a nc-file:

```
ds = xray.Dataset(
    {
        'conc_profs':      (['duration', 'z', 'burst'], ConcProf),
        'grainSize_profs': (['duration', 'z', 'burst'], GsizeProf),
        'burst_duration':  (['duration'], np.linspace(0, 299, Time.shape[0])),
    },
    coords={
        'time':     (['duration', 'burst'], Time),
        'zdist':    (['z'], Dist),
        'burst_nr': (['burst'], Burst),
    },
)
ds.to_netcdf('ABS_conc_size_12m.nc', mode='w')
```

It takes around 1 h to generate the nc-file. Could this be the reason for my headaches? Thanks!
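The `Dataset` construction in this comment can be run end to end with synthetic arrays in place of the real `ConcProf`/`GsizeProf` data (the shapes below are invented; the comment's older `xray` alias is today's `xarray`):

```python
import numpy as np
import xarray as xr

# made-up dimension sizes; the real data has 2845 bursts
n_dur, n_z, n_burst = 6, 4, 3
ConcProf  = np.random.rand(n_dur, n_z, n_burst)
GsizeProf = np.random.rand(n_dur, n_z, n_burst)
Time  = np.random.rand(n_dur, n_burst)
Dist  = np.linspace(0.0, 1.0, n_z)
Burst = np.arange(n_burst)

ds = xr.Dataset(
    {
        'conc_profs':      (['duration', 'z', 'burst'], ConcProf),
        'grainSize_profs': (['duration', 'z', 'burst'], GsizeProf),
        'burst_duration':  (['duration'], np.linspace(0, 299, Time.shape[0])),
    },
    coords={
        'time':     (['duration', 'burst'], Time),
        'zdist':    (['z'], Dist),
        'burst_nr': (['burst'], Burst),
    },
)
```

This keeps everything as plain in-memory numpy arrays; the one-hour `to_netcdf` cost discussed here is a property of the real data volume, not of the construction pattern itself.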
233991357 · shoyer (MEMBER) · 2016-07-20T15:46:50Z
https://github.com/pydata/xarray/issues/912#issuecomment-233991357

What do the original input files look like, before you join them together? This may be a case where the dask.array task scheduler does very poorly.
```
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
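The schema above can be exercised directly with Python's stdlib `sqlite3`; this sketch creates the table in memory, inserts one comment row, and reproduces the "where issue = 166593563 sorted by updated_at descending" query this page is built on (the inserted values are taken from the thread above):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
   [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

conn.execute(
    "INSERT INTO issue_comments (id, [user], issue, updated_at) VALUES (?, ?, ?, ?)",
    (233991357, 1217238, 166593563, '2016-07-20T15:46:50Z'),
)
rows = conn.execute(
    "SELECT id FROM issue_comments WHERE issue = ? ORDER BY updated_at DESC",
    (166593563,),
).fetchall()
```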