

issue_comments


2 rows where issue = 60303760 and user = 7504461, sorted by updated_at descending

id: 231021167
html_url: https://github.com/pydata/xarray/issues/364#issuecomment-231021167
issue_url: https://api.github.com/repos/pydata/xarray/issues/364
node_id: MDEyOklzc3VlQ29tbWVudDIzMTAyMTE2Nw==
user: saulomeirelles (7504461)
created_at: 2016-07-07T08:54:46Z
updated_at: 2016-07-07T08:59:15Z
author_association: NONE

Thanks, @shoyer!

Here is an example of how I circumvented the problem:

```python
import datetime as dt
import time

import numpy as np
import pandas as pd
import xray

# hourly toy data spanning five days
data = np.random.rand(24 * 5)
times = pd.date_range('2000-01-01', periods=24 * 5, freq='H')
foo = xray.DataArray(data, coords=[times], dims=['time'])
foo = foo.to_dataset(name='foo')

T = time.mktime(dt.datetime(1970, 1, 1, 12 + 1, 25, 12).timetuple())  # 12.42 hours
Tint = [int(time.mktime(t.timetuple()) / T)
        for t in foo.time.values.astype('datetime64[s]').tolist()]
foo2 = xray.DataArray(Tint, coords=foo.time.coords, dims=foo.time.dims)
foo.merge(foo2.to_dataset(name='Tint'), inplace=True)

foo_grp = foo.groupby('Tint')
foo_grp.group.plot.line()
```

In my case the dataset is quite large, so merging in the new variable Tint cost a lot of computational time.
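The per-element `time.mktime` loop above can be replaced by a single vectorised integer division on epoch seconds, which avoids both the Python loop and the costly merge. A numpy/pandas sketch of that idea, under the same assumptions as the example (hourly data over five days; variable names are illustrative, and epoch seconds here are UTC rather than `mktime`'s local time):

```python
import numpy as np
import pandas as pd

# M2 tidal period, 12.42 hours in seconds (rounded to avoid float error)
T = round(12.42 * 3600)

times = pd.date_range('2000-01-01', periods=24 * 5, freq='h')
series = pd.Series(np.random.rand(24 * 5), index=times)

# vectorised replacement for the time.mktime loop: epoch seconds
# floor-divided by the period give integer group labels
tint = times.values.astype('datetime64[s]').astype('int64') // T

# group by the label array directly -- no merged Tint variable needed
grouped = series.groupby(tint).mean()
```

Grouping by the bare label array sidesteps adding `Tint` to the dataset at all, which is where the merge cost came from.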

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: pd.Grouper support? (60303760)
id: 228723336
html_url: https://github.com/pydata/xarray/issues/364#issuecomment-228723336
issue_url: https://api.github.com/repos/pydata/xarray/issues/364
node_id: MDEyOklzc3VlQ29tbWVudDIyODcyMzMzNg==
user: saulomeirelles (7504461)
created_at: 2016-06-27T11:45:09Z
updated_at: 2016-06-27T11:45:09Z
author_association: NONE

This would be very useful functionality. I am wondering whether I can specify the time window, for example like `ds.groupby(time=pd.TimeGrouper('12.42H'))`. Is there a way to do that in xarray?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: pd.Grouper support? (60303760)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette