issue_comments
11 rows where user = 6334793 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
549511089 | https://github.com/pydata/xarray/issues/1115#issuecomment-549511089 | https://api.github.com/repos/pydata/xarray/issues/1115 | MDEyOklzc3VlQ29tbWVudDU0OTUxMTA4OQ== | hrishikeshac 6334793 | 2019-11-04T19:31:46Z | 2019-11-04T19:31:46Z | NONE | Guys sorry for dropping the ball on this one. I made some changes to the PR based on the feedback I got, but I couldn't figure out the tests. Would anyone like to take this over? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature request: Compute cross-correlation (similar to pd.Series.corr()) of gridded data 188996339 | |
452782113 | https://github.com/pydata/xarray/pull/2652#issuecomment-452782113 | https://api.github.com/repos/pydata/xarray/issues/2652 | MDEyOklzc3VlQ29tbWVudDQ1Mjc4MjExMw== | hrishikeshac 6334793 | 2019-01-09T17:32:12Z | 2019-01-09T17:32:12Z | NONE |
I always assumed an |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
cov() and corr() 396102183 | |
451602947 | https://github.com/pydata/xarray/issues/1115#issuecomment-451602947 | https://api.github.com/repos/pydata/xarray/issues/1115 | MDEyOklzc3VlQ29tbWVudDQ1MTYwMjk0Nw== | hrishikeshac 6334793 | 2019-01-04T23:48:54Z | 2019-01-04T23:48:54Z | NONE | PR done! Changed np.sum() to dataarray.sum() |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature request: Compute cross-correlation (similar to pd.Series.corr()) of gridded data 188996339 | |
451602256 | https://github.com/pydata/xarray/pull/2652#issuecomment-451602256 | https://api.github.com/repos/pydata/xarray/issues/2652 | MDEyOklzc3VlQ29tbWVudDQ1MTYwMjI1Ng== | hrishikeshac 6334793 | 2019-01-04T23:44:10Z | 2019-01-04T23:44:10Z | NONE | Made the code PEP8 compatible. Apologies for not doing so earlier. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
cov() and corr() 396102183 | |
451052107 | https://github.com/pydata/xarray/issues/1115#issuecomment-451052107 | https://api.github.com/repos/pydata/xarray/issues/1115 | MDEyOklzc3VlQ29tbWVudDQ1MTA1MjEwNw== | hrishikeshac 6334793 | 2019-01-03T04:10:35Z | 2019-01-03T04:14:54Z | NONE | Okay. Here's what I have come up with. I have tested it against two 1-D DataArrays, two N-D DataArrays, and one 1-D paired with one N-D DataArray, in all cases misaligned and containing missing values. Before going forward: 1. What do you think of it? Any improvements? 2. Steps 1 and 2 (broadcasting and ignoring common missing values) are identical in both cov() and corr(). Is there a better way to reduce the duplication while still retaining both functions as standalone?

```python
def cov(self, other, dim=None):
    """Compute covariance between two DataArray objects along a shared dimension.

def corr(self, other, dim=None):
    """Compute correlation between two DataArray objects along a shared dimension.
```

For testing:

```python
# self: Load demo data and trim its size
ds = xr.tutorial.load_dataset('air_temperature')
air = ds.air[:18, ...]

# other: select misaligned data, and smooth it to dampen the correlation with self.
air_smooth = ds.air[2:20, ...].rolling(time=3, center=True).mean(dim='time')

# A handy function to select an example grid
def select_pts(da):
    return da.sel(lat=45, lon=250)
```
 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature request: Compute cross-correlation (similar to pd.Series.corr()) of gridded data 188996339 | |
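[Editor's note: the two shared steps described in the comment above (align the arrays, then ignore values missing in either one) can be sketched in plain NumPy for the 1-D case. `cov_1d` and `corr_1d` are hypothetical names for illustration, not the functions from the PR.]

```python
import numpy as np

def _common_valid(x, y):
    # Step 2 from the comment: keep only positions where BOTH arrays are valid
    valid = np.isfinite(x) & np.isfinite(y)
    return np.where(valid, x, np.nan), np.where(valid, y, np.nan), valid

def cov_1d(x, y):
    # Sample covariance over the commonly valid points (ddof=1)
    x, y, valid = _common_valid(x, y)
    n = valid.sum()
    xm = np.nansum(x) / n
    ym = np.nansum(y) / n
    return np.nansum((x - xm) * (y - ym)) / (n - 1)

def corr_1d(x, y):
    # Pearson correlation: covariance normalised by the two standard deviations
    x, y, valid = _common_valid(x, y)
    return cov_1d(x, y) / (np.nanstd(x, ddof=1) * np.nanstd(y, ddof=1))
```

Because both functions mask to the common valid points first, a NaN in either input drops that position from both, mirroring the deduplication question raised in the comment.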
445390271 | https://github.com/pydata/xarray/issues/1115#issuecomment-445390271 | https://api.github.com/repos/pydata/xarray/issues/1115 | MDEyOklzc3VlQ29tbWVudDQ0NTM5MDI3MQ== | hrishikeshac 6334793 | 2018-12-07T22:53:06Z | 2018-12-07T22:53:06Z | NONE | Okay. I am writing the simultaneous correlation and covariance functions in dataarray.py instead of dataset.py, following the pd.Series.corr(self, other, dim) style. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature request: Compute cross-correlation (similar to pd.Series.corr()) of gridded data 188996339 | |
442994118 | https://github.com/pydata/xarray/issues/1115#issuecomment-442994118 | https://api.github.com/repos/pydata/xarray/issues/1115 | MDEyOklzc3VlQ29tbWVudDQ0Mjk5NDExOA== | hrishikeshac 6334793 | 2018-11-29T21:09:55Z | 2018-11-29T21:09:55Z | NONE | Sorry for the radio silence- I will work on this next week. Thanks @max-sixty for the updates, @rabernat for reaching out, will let you know if I need help. Should we keep it simple following @max-sixty , or should I also add the functionality to handle lagged correlations? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature request: Compute cross-correlation (similar to pd.Series.corr()) of gridded data 188996339 | |
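[Editor's note: the lagged correlation raised in the comment above can be sketched in NumPy for the 1-D case; `lagged_corr` is a hypothetical helper for illustration, not part of the proposal.]

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation of x against y shifted by `lag` samples.

    Positive lag compares x[lag:] with y[:-lag]; negative lag the reverse.
    """
    if lag > 0:
        x, y = x[lag:], y[:-lag]
    elif lag < 0:
        x, y = x[:lag], y[-lag:]
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())
```

Supporting lags would only add a shift step before the simple (lag-0) path, which may be why the thread weighs keeping the first version minimal.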
419501548 | https://github.com/pydata/xarray/issues/1115#issuecomment-419501548 | https://api.github.com/repos/pydata/xarray/issues/1115 | MDEyOklzc3VlQ29tbWVudDQxOTUwMTU0OA== | hrishikeshac 6334793 | 2018-09-07T16:55:13Z | 2018-09-07T16:55:13Z | NONE | @max-sixty thanks! Then I will start with testing @shoyer 's suggestion and |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature request: Compute cross-correlation (similar to pd.Series.corr()) of gridded data 188996339 | |
418406658 | https://github.com/pydata/xarray/issues/1115#issuecomment-418406658 | https://api.github.com/repos/pydata/xarray/issues/1115 | MDEyOklzc3VlQ29tbWVudDQxODQwNjY1OA== | hrishikeshac 6334793 | 2018-09-04T15:15:35Z | 2018-09-04T15:15:35Z | NONE | Sometime back I wrote a package based on xarray regarding this. I would be happy to be involved in implementing it in xarray as well, but I am new to contributing to such a large-scale project and it looks a bit intimidating! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature request: Compute cross-correlation (similar to pd.Series.corr()) of gridded data 188996339 | |
375726695 | https://github.com/pydata/xarray/issues/2009#issuecomment-375726695 | https://api.github.com/repos/pydata/xarray/issues/2009 | MDEyOklzc3VlQ29tbWVudDM3NTcyNjY5NQ== | hrishikeshac 6334793 | 2018-03-23T16:40:06Z | 2018-03-23T16:43:45Z | NONE | @mathause Thanks! Your solution worked brilliantly when used with contourf(). Here's how the code looks after implementing it.

```python
import cartopy.crs as ccrs
import xarray as xr
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

projection = False
ts = xr.tutorial.load_dataset('air_temperature').air[0, ...]
ncols, nrows = 2, 2

ny, nx = ts.shape
dx, dy = nx / ny, 1
figsize = plt.figaspect(float(dy * ncols) / float(dx * nrows))
fig = plt.figure(figsize=figsize)
gs = gridspec.GridSpec(ncols, nrows, wspace=0, hspace=0)

def set_map_layout(axes, width=17.0):
    """ set figure height, given width

for i in range(4):
    if projection:
        ax = plt.subplot(gs[i], projection=ccrs.PlateCarree())
        ax.coastlines()
        ts.plot.contourf(ax=ax, add_colorbar=False, add_labels=False,
                         levels=11, transform=ccrs.PlateCarree())
    else:
        ax = plt.subplot(gs[i])
        ts.plot.contourf(ax=ax, add_colorbar=False, levels=11, add_labels=False)

fig.subplots_adjust()
```
Without projection: (image)
With projection: (image)
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Removing inter-subplot spaces when using cartopy projections 307903558 | |
331686038 | https://github.com/pydata/xarray/issues/1115#issuecomment-331686038 | https://api.github.com/repos/pydata/xarray/issues/1115 | MDEyOklzc3VlQ29tbWVudDMzMTY4NjAzOA== | hrishikeshac 6334793 | 2017-09-24T04:14:00Z | 2017-09-24T04:14:00Z | NONE | FYI @shoyer @fmaussion , I had to revisit the problem and ended up writing a function to compute vectorized cross-correlation, covariance, regression calculations (along with p-value and standard error) for xr.DataArrays. Essentially, I tried to mimic scipy.stats.linregress() but for multi-dimensional data, and included the ability to compute lagged relationships. Here's the function and its demonstration; please feel free to incorporate it in xarray if deemed useful: https://hrishichandanpurkar.blogspot.com/2017/09/vectorized-functions-for-correlation.html |
{ "total_count": 5, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
Feature request: Compute cross-correlation (similar to pd.Series.corr()) of gridded data 188996339 |
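[Editor's note: the linked post mimics scipy.stats.linregress() for multi-dimensional data. The core idea can be sketched in NumPy; `linregress_nd` is a hypothetical name, and this sketch returns only slope, intercept, and r, without the p-value and standard error the blog version also computes.]

```python
import numpy as np

def linregress_nd(x, y, axis=-1):
    """Slope, intercept, and correlation of y against x along one axis,
    computed for every other grid point at once (no Python loop)."""
    xm = x.mean(axis=axis, keepdims=True)
    ym = y.mean(axis=axis, keepdims=True)
    cov = ((x - xm) * (y - ym)).mean(axis=axis)
    slope = cov / x.var(axis=axis)
    intercept = ym.squeeze(axis=axis) - slope * xm.squeeze(axis=axis)
    r = cov / (x.std(axis=axis) * y.std(axis=axis))
    return slope, intercept, r
```

The vectorization comes entirely from broadcasting: the reductions collapse only the shared dimension (e.g. time), so the outputs keep the remaining spatial dimensions.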
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
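[Editor's note: the page's filter ("11 rows where user = 6334793 sorted by updated_at descending") corresponds to a simple query against this schema. A minimal sqlite3 sketch, seeded with one sample row taken from the table above:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Abbreviated version of the schema shown above (foreign keys omitted)
conn.executescript("""
CREATE TABLE issue_comments (
    html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY, node_id TEXT,
    user INTEGER, created_at TEXT, updated_at TEXT, author_association TEXT,
    body TEXT, reactions TEXT, performed_via_github_app TEXT, issue INTEGER
);
CREATE INDEX idx_issue_comments_user ON issue_comments (user);
""")
# One sample row from the listing above
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at) VALUES (?, ?, ?)",
    (549511089, 6334793, "2019-11-04T19:31:46Z"),
)
# The filter and sort this page applies
rows = conn.execute(
    "SELECT id FROM issue_comments WHERE user = ? ORDER BY updated_at DESC",
    (6334793,),
).fetchall()
```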