issue_comments

5 rows where issue = 370183554 (gridding data with groupby_bins in 2 dim), sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
703276522 https://github.com/pydata/xarray/issues/2488#issuecomment-703276522 https://api.github.com/repos/pydata/xarray/issues/2488 MDEyOklzc3VlQ29tbWVudDcwMzI3NjUyMg== dcherian 2448579 2020-10-04T16:01:53Z 2020-10-04T16:01:53Z MEMBER

This will be addressed as part of multi-variable groupby.

{"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
  gridding data with groupby_bins in 2 dim 370183554
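As a sketch of what the comment above points to (not code from the issue): recent xarray releases ship grouper objects in xarray.groupers, and a multi-variable groupby with BinGrouper handles this two-dimensional binning case directly. The synthetic dataset, variable names, and bin edges below are illustrative assumptions.

import numpy as np
import xarray as xr
from xarray.groupers import BinGrouper  # grouper objects in recent xarray releases

# synthetic scattered observations along a single "points" dimension (illustrative only)
rng = np.random.default_rng(0)
npoints = 1000
ds = xr.Dataset(
    {"z": ("points", rng.random(npoints))},
    coords={
        "lon": ("points", rng.uniform(-180, 180, npoints)),
        "lat": ("points", rng.uniform(-90, 90, npoints)),
    },
)

lon_bin = np.arange(-180, 181, 5)
lat_bin = np.arange(-90, 91, 5)

# bin over both coordinates in one groupby call, then reduce each 2-D box to its mean
gridded = ds.groupby(
    lon=BinGrouper(bins=lon_bin),
    lat=BinGrouper(bins=lat_bin),
).mean()
# gridded.z is indexed by the two interval coordinates (lon bins and lat bins)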
703222102 https://github.com/pydata/xarray/issues/2488#issuecomment-703222102 https://api.github.com/repos/pydata/xarray/issues/2488 MDEyOklzc3VlQ29tbWVudDcwMzIyMjEwMg== stale[bot] 26384082 2020-10-04T08:32:37Z 2020-10-04T08:32:37Z NONE

In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity.

If this issue remains relevant, please comment here or remove the stale label; otherwise it will be marked as closed automatically.

{"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
  gridding data with groupby_bins in 2 dim 370183554
430232155 https://github.com/pydata/xarray/issues/2488#issuecomment-430232155 https://api.github.com/repos/pydata/xarray/issues/2488 MDEyOklzc3VlQ29tbWVudDQzMDIzMjE1NQ== HandmannP 16838898 2018-10-16T13:13:37Z 2018-10-16T13:13:37Z NONE

I am open to suggestions to get the code running faster :D

{"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
  gridding data with groupby_bins in 2 dim 370183554
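A hedged suggestion in response to the comment above (not taken from the thread): the per-box bookkeeping that the next comment implements with dictionaries can usually be written as a nested groupby_bins, mapping a latitude binning over each longitude bin. The dataset and bin edges here are assumptions for illustration.

import numpy as np
import xarray as xr

# synthetic point data with 1-D lon/lat coordinates (illustrative assumptions)
rng = np.random.default_rng(0)
npoints = 1000
ds = xr.Dataset(
    {"z": ("points", rng.random(npoints))},
    coords={
        "lon": ("points", rng.uniform(-180, 180, npoints)),
        "lat": ("points", rng.uniform(-90, 90, npoints)),
    },
)

lon_bin = np.arange(-180, 181, 5)
lat_bin = np.arange(-90, 91, 5)
lon_cent = 0.5 * (lon_bin[:-1] + lon_bin[1:])
lat_cent = 0.5 * (lat_bin[:-1] + lat_bin[1:])

# outer binning over lon; inside each lon box, bin over lat and take the mean
gridded = ds.groupby_bins("lon", lon_bin, labels=lon_cent).map(
    lambda box: box.groupby_bins("lat", lat_bin, labels=lat_cent).mean()
)
# gridded.z has dimensions lon_bins and lat_bins; boxes with no samples end up missing/NaN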
430232027 https://github.com/pydata/xarray/issues/2488#issuecomment-430232027 https://api.github.com/repos/pydata/xarray/issues/2488 MDEyOklzc3VlQ29tbWVudDQzMDIzMjAyNw== HandmannP 16838898 2018-10-16T13:13:13Z 2018-10-16T13:13:13Z NONE

%%time
# Context assumed from earlier in the issue: geop1 holds the scattered input data,
# lon_bin/lat_bin are the bin edges and lon_cent/lat_cent the corresponding bin centres.
import numpy as np
import xarray as xr

def group_lat(x):
    # x is a DataFrame of group values
    # find the value of the longitude box to append to the dictionary key
    value = np.ones(1)
    value[0] = x.lon.mean()
    idx = (np.abs(lon_cent - value)).argmin()
    lokey = lon_cent[idx]  # longitude value of the box

    # compute groups for the latitude
    y = x.groupby_bins('lat', lat_bin, labels=lat_cent)
    y = dict(y)
    # replace the old key with the new key: (lon, lat)
    key = np.asarray(list(y.keys()))   # get dict keys as array
    newkey = np.stack((np.ones(len(key)) * lokey, key), axis=1)
    newkey = tuple(newkey.tolist())
    key = tuple(y.keys())              # get dict keys as tuple

    for i in range(len(key)):
        y[tuple(newkey[i])] = y[key[i]]
        del y[key[i]]
    return y

# one way: apply group_lat to every longitude bin directly
geop_mean = geop1.groupby_bins('lon', lon_bin, labels=lon_cent).apply(group_lat)

# alternative used below: keep the longitude groups as a dict and call group_lat by hand
geop_mean = geop1.groupby_bins('lon', lon_bin, labels=lon_cent)
geop_mean = dict(geop_mean)

Group into lat boxes:

l = 0
geo_grid = dict()

for x in list(geop_mean.keys()):
    y = group_lat(geop_mean[x])
    if l == 0:
        geo_grid = y
    else:
        geo_grid.update(y)
    l += 1

Now the data is sorted into boxes and still contains all metadata.

Now get the mean values for each box:

l = 0
m = np.zeros((len(tuple(geo_grid.keys())), 4))
d = np.asarray(list(geo_grid.keys()))

gp = xr.Dataset(
    {'geopot': (['lat', 'lon'], np.ones((lat_cent.shape[0], lon_cent.shape[0]))),
     'z': (['lat', 'lon'], np.ones((lat_cent.shape[0], lon_cent.shape[0])))},
    coords={'lon': (['lon'], lon_cent), 'lat': (['lat'], lat_cent)})

for k in range(d.shape[0]):
    e = tuple(d[k])
    # m[l, 2] = geo_grid[e].z.mean()
    gp['geopot'].loc[dict(lat=d[k][1], lon=d[k][0])] = geo_grid[e].geopot.mean()
    gp['z'].loc[dict(lat=d[k][1], lon=d[k][0])] = geo_grid[e].z.mean()
    # gp.loc[dict(lat=m[0,1], lon=m[0,0])]
    l += 1

{"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
  gridding data with groupby_bins in 2 dim 370183554
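As another hedged alternative to the dictionary-based workaround above (the column names and bin edges are assumptions, not from the issue): doing the two-dimensional cut in pandas keeps everything in one vectorised groupby, which tends to be much faster than looping over group dictionaries in Python.

import numpy as np
import pandas as pd
import xarray as xr

# hypothetical point data as a DataFrame with lon/lat/z columns
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "lon": rng.uniform(-180, 180, n),
    "lat": rng.uniform(-90, 90, n),
    "z": rng.random(n),
})

lon_bin = np.arange(-180, 181, 5)
lat_bin = np.arange(-90, 91, 5)
lon_cent = 0.5 * (lon_bin[:-1] + lon_bin[1:])
lat_cent = 0.5 * (lat_bin[:-1] + lat_bin[1:])

# one pass: cut both coordinates, group on the pair of bins, take the mean per box
binned = df.groupby(
    [pd.cut(df["lat"], lat_bin, labels=lat_cent),
     pd.cut(df["lon"], lon_bin, labels=lon_cent)],
    observed=False,
)["z"].mean()

# back to a 2-D (lat, lon) DataArray; empty boxes are NaN
grid = xr.DataArray(
    binned.unstack().to_numpy(),
    coords={"lat": lat_cent, "lon": lon_cent},
    dims=("lat", "lon"),
)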
430231535 https://github.com/pydata/xarray/issues/2488#issuecomment-430231535 https://api.github.com/repos/pydata/xarray/issues/2488 MDEyOklzc3VlQ29tbWVudDQzMDIzMTUzNQ== HandmannP 16838898 2018-10-16T13:11:44Z 2018-10-16T13:12:36Z NONE

I wrote a workaround for my purpose, but I guess it could still be faster ...

{"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
  gridding data with groupby_bins in 2 dim 370183554

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
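To query this table outside Datasette, a minimal sketch using Python's built-in sqlite3 module; the database filename is a hypothetical placeholder, and the schema matches the CREATE TABLE above.

import sqlite3

# assumes a local copy of the SQLite database behind this page (filename is hypothetical)
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    select id, user, created_at, author_association, body
    from issue_comments
    where issue = ?
    order by updated_at desc
    """,
    (370183554,),
).fetchall()

for comment_id, user_id, created_at, association, body in rows:
    print(comment_id, created_at, association, body[:60])

conn.close()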