
issue_comments


3 rows where user = 19403647 sorted by updated_at descending


id: 369309422
html_url: https://github.com/pydata/xarray/issues/1948#issuecomment-369309422
issue_url: https://api.github.com/repos/pydata/xarray/issues/1948
node_id: MDEyOklzc3VlQ29tbWVudDM2OTMwOTQyMg==
user: serazing (19403647)
created_at: 2018-02-28T17:07:23Z
updated_at: 2018-02-28T17:07:23Z
author_association: NONE

Alright, this might be a better idea. I'll try to suggest this functionality to matplotlib first.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Adding NCL colortables to xarray (301013548)
id: 266032884
html_url: https://github.com/pydata/xarray/issues/1142#issuecomment-266032884
issue_url: https://api.github.com/repos/pydata/xarray/issues/1142
node_id: MDEyOklzc3VlQ29tbWVudDI2NjAzMjg4NA==
user: serazing (19403647)
created_at: 2016-12-09T14:56:35Z
updated_at: 2016-12-09T14:56:35Z
author_association: NONE

Hi, I have taken another approach for using an n-dimensional window over several dimensions of xarray objects to perform filtering and tapering, based on scipy.ndimage, scipy.signal, and dask's map_overlap. @shoyer @jhamman it is somewhat similar to what I presented during the aospy meeting, and it also relates to issue #819.

For the moment, I have something that works like this:

```python
shape = (50, 30, 40)
dims = ('x', 'y', 'z')
dummy_array = xr.DataArray(np.random.random(shape), dims=dims)

# Define and set a window object
w = dummy_array.window
w.set(n={'x': 24, 'y': 24}, cutoff={'x': 0.01, 'y': 0.01}, window='hanning')
```

where `n` is the filter order (i.e. the size), `cutoff` is the cutoff frequency, and `window` is any window name that can be found in the `scipy.signal.windows` collection.

Then the filtering can be performed using the `w.convolve()` method, which builds a dask graph for the convolution product.

I also want to add a tapering method `w.taper()`, which would be useful for spectral analysis. For multi-tapering, it should also generate an object with an additional dimension corresponding to the number of windows. To do that, I first need to handle the window building using dask.

Let me know if you are interested in this approach. For the moment, I have planned to upload a GitHub project for signal processing tools in the framework of pangeo-data. It should be online by the end of December and I will be happy to have feedback on it. I am not sure it falls into the xarray framework and it may need a dedicated project, but I might be wrong.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: win_type for rolling() ? (192248351)
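
Editor's note: the `window` accessor described in the comment above is a proposed API; `w.set()`, `w.convolve()` and `w.taper()` are the author's names, not something shipped by xarray. As a rough, non-authoritative sketch of the same idea using xarray's existing rolling machinery (all variable names, window sizes and the chunking below are assumptions), a Hanning-weighted smoothing over two dimensions that stays lazy on dask could look like this:

```python
# Minimal sketch (assumption): emulate the proposed windowed convolution with
# xarray's existing rolling().construct() machinery instead of the hypothetical
# `dummy_array.window` accessor.
import numpy as np
import xarray as xr
from scipy.signal import get_window

shape = (50, 30, 40)
dims = ('x', 'y', 'z')
dummy_array = xr.DataArray(np.random.random(shape), dims=dims).chunk({'x': 25})

# Build a separable 2-D Hanning window over the 'x' and 'y' dimensions.
nx, ny = 24, 24
wx = xr.DataArray(get_window('hann', nx), dims='wx')
wy = xr.DataArray(get_window('hann', ny), dims='wy')
weights = wx * wy
weights = weights / weights.sum()

# rolling(...).construct(...) exposes each moving window as new dimensions,
# so the convolution becomes a weighted dot product; with a dask-backed
# array this stays lazy until .compute() is called.
windowed = dummy_array.rolling(x=nx, y=ny, center=True).construct(x='wx', y='wy')
filtered = windowed.dot(weights)
print(filtered.shape)  # (50, 30, 40)
```

Because the result is a weighted dot product over the constructed window dimensions, dask can evaluate it chunk by chunk, which is roughly what a `w.convolve()` that "builds a dask graph" would do.
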
id: 260379241
html_url: https://github.com/pydata/xarray/issues/1115#issuecomment-260379241
issue_url: https://api.github.com/repos/pydata/xarray/issues/1115
node_id: MDEyOklzc3VlQ29tbWVudDI2MDM3OTI0MQ==
user: serazing (19403647)
created_at: 2016-11-14T16:10:55Z
updated_at: 2016-11-14T16:10:55Z
author_association: NONE

I agree with @rabernat in the sense that it could be part of another package (e.g., signal processing). This would also allow the computation of statistical tests to assess the significance of the correlation (which is useful, since correlations are often misinterpreted without statistical tests).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Feature request: Compute cross-correlation (similar to pd.Series.corr()) of gridded data (188996339)
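
Editor's note: newer xarray versions provide `xr.corr` for the correlation itself, but the significance testing mentioned in the comment above still has to come from another package. A minimal, illustrative sketch (the variable names, field shapes and the 5% threshold are assumptions) wrapping `scipy.stats.pearsonr` with `xr.apply_ufunc`:

```python
# Minimal sketch (assumption): point-wise correlation of two gridded fields
# along 'time', plus the p-value of the associated significance test.
import numpy as np
import xarray as xr
from scipy import stats

ntime = 100
a = xr.DataArray(np.random.randn(ntime, 30, 40), dims=('time', 'y', 'x'))
b = 0.5 * a + np.random.randn(ntime, 30, 40)

def _pearsonr(x, y):
    # Runs on 1-D time series; returns the correlation and its p-value.
    r, p = stats.pearsonr(x, y)
    return r, p

r, p = xr.apply_ufunc(
    _pearsonr, a, b,
    input_core_dims=[['time'], ['time']],
    output_core_dims=[[], []],
    vectorize=True,  # loop _pearsonr over every (y, x) grid point
)
significant = r.where(p < 0.05)  # mask correlations that fail the 5% test
```
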


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
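
The row selection shown on this page ("3 rows where user = 19403647 sorted by updated_at descending") can be reproduced against a local copy of this database with plain sqlite3; the database filename below is a hypothetical placeholder:

```python
# Minimal sketch (assumption): query a local copy of the issue_comments table;
# the filename 'github.db' is not part of this page and is only illustrative.
import sqlite3

conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT id, html_url, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE [user] = ?              -- filter served by idx_issue_comments_user
    ORDER BY updated_at DESC
    """,
    (19403647,),
).fetchall()

for row in rows:
    print(row["id"], row["updated_at"], row["html_url"])
```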