issue_comments
8 rows where user = 7504461 sorted by updated_at descending

id: 234043292
html_url: https://github.com/pydata/xarray/issues/912#issuecomment-234043292
issue_url: https://api.github.com/repos/pydata/xarray/issues/912
node_id: MDEyOklzc3VlQ29tbWVudDIzNDA0MzI5Mg==
user: saulomeirelles 7504461
created_at: 2016-07-20T18:44:53Z
updated_at: 2016-07-20T18:44:53Z
author_association: NONE
issue: Speed up operations with xarray dataset 166593563
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body: No, not really. I got no error message whatsoever. Is there any test I can do to tackle this? Sent from Smartphone. Please forgive typos. On Jul 20, 2016 8:41 PM, "Stephan Hoyer" notifications@github.com wrote: …

id: 234035910
html_url: https://github.com/pydata/xarray/issues/912#issuecomment-234035910
issue_url: https://api.github.com/repos/pydata/xarray/issues/912
node_id: MDEyOklzc3VlQ29tbWVudDIzNDAzNTkxMA==
user: saulomeirelles 7504461
created_at: 2016-07-20T18:20:24Z
updated_at: 2016-07-20T18:20:24Z
author_association: NONE
issue: Speed up operations with xarray dataset 166593563
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body: True. I decided to wait for …

id: 234022793
html_url: https://github.com/pydata/xarray/issues/912#issuecomment-234022793
issue_url: https://api.github.com/repos/pydata/xarray/issues/912
node_id: MDEyOklzc3VlQ29tbWVudDIzNDAyMjc5Mw==
user: saulomeirelles 7504461
created_at: 2016-07-20T17:36:02Z
updated_at: 2016-07-20T17:36:17Z
author_association: NONE
issue: Speed up operations with xarray dataset 166593563
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body: Thanks, @shoyer! Setting smaller chunks helps; however, my issue is the way back. This is fine: … But this: … takes an insane amount of time, which intrigues me because it is just a vector with 2845 points. Is there another way to tackle this without …? If …
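
The two snippets this comment refers to ("This is fine" / "But this") were lost in extraction. As a hedged sketch of the pattern being described, reusing the file and variable names from the longer comment further down this list: opening with small dask chunks is lazy and cheap, and the cost only appears when even a small result is pulled into memory.

```python
import xarray as xr

# Lazy open with small dask chunks (file/variable names taken from the
# related comment below; the original snippets are lost).
ds = xr.open_dataset('ABS_conc_size_12m.nc', chunks={'burst': 100})

# "This is fine": building the reduction returns instantly; nothing
# is computed yet, only a dask task graph is assembled.
mean_prof = ds['conc_profs'].mean(dim=['duration', 'z'])

# "But this": pulling the ~2845-point result into memory triggers the
# whole task graph over the full file, which is where the time goes.
values = mean_prof.values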

id: 233998071
html_url: https://github.com/pydata/xarray/issues/912#issuecomment-233998071
issue_url: https://api.github.com/repos/pydata/xarray/issues/912
node_id: MDEyOklzc3VlQ29tbWVudDIzMzk5ODA3MQ==
user: saulomeirelles 7504461
created_at: 2016-07-20T16:08:57Z
updated_at: 2016-07-20T16:08:57Z
author_association: NONE
issue: Speed up operations with xarray dataset 166593563
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body: I've tried to create individual nc-files and then read them all using … The … Cheers,
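
The name of the call is truncated out of this comment; reading a directory of per-burst nc-files in one go is what xarray's open_mfdataset does, so here is a sketch under that assumption (the glob pattern and concat dimension are also assumptions):

```python
import xarray as xr

# Combine many small nc-files into one lazy dataset; pattern and
# concat dimension are assumed, since the comment is cut off.
ds = xr.open_mfdataset('burst_*.nc', combine='nested', concat_dim='burst')
```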

id: 233995495
html_url: https://github.com/pydata/xarray/issues/912#issuecomment-233995495
issue_url: https://api.github.com/repos/pydata/xarray/issues/912
node_id: MDEyOklzc3VlQ29tbWVudDIzMzk5NTQ5NQ==
user: saulomeirelles 7504461
created_at: 2016-07-20T16:00:02Z
updated_at: 2016-07-20T16:00:02Z
author_association: NONE
issue: Speed up operations with xarray dataset 166593563
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body: The input files are 2485 nested mat-files that come out of a measurement device. I read them in Python:

```
matfiles = glob('*sed.mat')
```

Afterwards, I populate the matrices in a loop:

```
def f(i):
    ...
```

where

```
def getABSpars(matfile):
    ...
```

Using the … Finally, I create the xarray dataset and then save it into a nc-file:

```
ds = xray.Dataset(
    {
        'conc_profs': (['duration', 'z', 'burst'], ConcProf),
        'grainSize_profs': (['duration', 'z', 'burst'], GsizeProf),
        'burst_duration': (['duration'], np.linspace(0, 299, Time.shape[0])),
    },
    coords={
        'time': (['duration', 'burst'], Time),
        'zdist': (['z'], Dist),
        'burst_nr': (['burst'], Burst),
    },
)
ds.to_netcdf('ABS_conc_size_12m.nc', mode='w')
```

It costs me around 1 h to generate the nc-file. Could this be the reason for my headaches? Thanks!
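
The bodies of f() and getABSpars() did not survive extraction, and the "Using the …" sentence is cut off (plausibly a parallel map). A hedged reconstruction of the described pipeline, in which the .mat keys and the multiprocessing choice are assumptions, not the author's actual code:

```python
from glob import glob
from multiprocessing import Pool

import numpy as np
from scipy.io import loadmat

matfiles = sorted(glob('*sed.mat'))

def getABSpars(matfile):
    # One burst per file; the .mat keys here are assumptions.
    m = loadmat(matfile)
    return m['conc'], m['gsize'], m['time']

def f(i):
    return getABSpars(matfiles[i])

if __name__ == '__main__':
    with Pool() as pool:
        results = pool.map(f, range(len(matfiles)))

    # Stack the per-file (duration, z) profiles along a new 'burst' axis.
    ConcProf = np.stack([r[0] for r in results], axis=-1)
    GsizeProf = np.stack([r[1] for r in results], axis=-1)
```

Reading and stacking all files first, then writing the dataset once at the end (as the comment describes), at least avoids repeated netCDF writes being part of the hour-long cost.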

id: 231021167
html_url: https://github.com/pydata/xarray/issues/364#issuecomment-231021167
issue_url: https://api.github.com/repos/pydata/xarray/issues/364
node_id: MDEyOklzc3VlQ29tbWVudDIzMTAyMTE2Nw==
user: saulomeirelles 7504461
created_at: 2016-07-07T08:54:46Z
updated_at: 2016-07-07T08:59:15Z
author_association: NONE
issue: pd.Grouper support? 60303760
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body: Thanks, @shoyer! Here is an example of how I circumvented the problem: … In my case, the …
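
The workaround snippet itself was lost, so this is only a plausible shape for it: going through pandas, where pd.Grouper already works, and converting back to xarray afterwards. Toy data and the 10-minute window are assumptions.

```python
import pandas as pd

# Toy irregular series standing in for the real data.
times = pd.to_datetime(['2016-01-01 00:00', '2016-01-01 00:07',
                        '2016-01-01 00:21', '2016-01-01 00:34'])
s = pd.Series([1.0, 2.0, 3.0, 4.0], index=times)

# pd.Grouper provides the time-window grouping on the pandas side ...
binned = s.groupby(pd.Grouper(freq='10min')).mean()

# ... and the result can be moved back into xarray.
da = binned.to_xarray()
```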

id: 228723336
html_url: https://github.com/pydata/xarray/issues/364#issuecomment-228723336
issue_url: https://api.github.com/repos/pydata/xarray/issues/364
node_id: MDEyOklzc3VlQ29tbWVudDIyODcyMzMzNg==
user: saulomeirelles 7504461
created_at: 2016-06-27T11:45:09Z
updated_at: 2016-06-27T11:45:09Z
author_association: NONE
issue: pd.Grouper support? 60303760
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body: This is a very useful functionality. I am wondering if I can specify the time window, for example, like …
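
The commenter's own example of a time window is truncated; in current xarray the window is specified as a frequency string passed to resample. A sketch with toy data:

```python
import numpy as np
import pandas as pd
import xarray as xr

da = xr.DataArray(
    np.arange(48.0),
    dims='time',
    coords={'time': pd.date_range('2016-01-01', periods=48, freq='30min')},
)

# A 3-hour window: each output value averages six 30-minute samples.
windowed = da.resample(time='3h').mean()
```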

id: 150618114
html_url: https://github.com/pydata/xarray/issues/191#issuecomment-150618114
issue_url: https://api.github.com/repos/pydata/xarray/issues/191
node_id: MDEyOklzc3VlQ29tbWVudDE1MDYxODExNA==
user: saulomeirelles 7504461
created_at: 2015-10-23T16:00:26Z
updated_at: 2015-10-23T16:00:59Z
author_association: NONE
issue: interpolate/sample array at point 38849807
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body: Hi All,

This is indeed an excellent project with great potential! I am wondering if there is any progress on the interpolation issue. I am working with an irregular time series which I would pretty much like to upsample using xray.

Thanks for all the effort!

Saulo
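
At the time of this comment xarray had no interpolation support; it has since gained DataArray.interp, which covers exactly this upsampling use case. A sketch with toy irregular data:

```python
import pandas as pd
import xarray as xr

# Irregularly spaced samples standing in for the real series.
irregular = pd.to_datetime(['2015-10-01 00:00', '2015-10-01 00:13',
                            '2015-10-01 00:29', '2015-10-01 01:02'])
da = xr.DataArray([0.0, 1.3, 2.9, 6.2], dims='time',
                  coords={'time': irregular})

# Upsample by interpolating onto a regular 5-minute grid (needs scipy).
target = pd.date_range(irregular[0], irregular[-1], freq='5min')
upsampled = da.interp(time=target)
```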
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
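
The page's own filter ("8 rows where user = 7504461 sorted by updated_at descending") maps directly onto this schema. A sketch using Python's sqlite3 module, where the database filename github.db is an assumption:

```python
import sqlite3

# Database filename is an assumption; the query mirrors this page's filter.
conn = sqlite3.connect('github.db')
rows = conn.execute(
    """
    SELECT id, updated_at, author_association, body
    FROM issue_comments
    WHERE [user] = ?
    ORDER BY updated_at DESC
    """,
    (7504461,),
).fetchall()

for comment_id, updated_at, assoc, body in rows:
    print(comment_id, updated_at, assoc)
```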