
issue_comments


15 rows where user = 953992 sorted by updated_at descending


issue 6

  • add rolling_apply method or function 5
  • Basic multiIndex support and stack/unstack methods 3
  • Towards a (temporary?) workaround for datetime issues at the xarray-level 3
  • BUG: Dataset.from_dataframe() losing dims? 2
  • Better support for batched/out-of-core computation 1
  • BUG: not converting datetime64[ns] with tz from pandas.Series 1


author_association 1

  • MEMBER 15
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
276719997 https://github.com/pydata/xarray/issues/1084#issuecomment-276719997 https://api.github.com/repos/pydata/xarray/issues/1084 MDEyOklzc3VlQ29tbWVudDI3NjcxOTk5Nw== jreback 953992 2017-02-01T17:18:17Z 2017-02-01T17:18:17Z MEMBER

@spencerahill as I said above, you should not need to subclass at all; just define a new frequency, maybe something like Month30 or some such, which will then slot right into PeriodIndex
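A minimal sketch of that idea, assuming a fixed 30-day "month" is what is wanted. Month30 is hypothetical; here the generic pd.DateOffset(days=30) stands in for it (a real Month30 would need to be a properly registered offset before PeriodIndex could accept it):

```python
import pandas as pd

# Hypothetical stand-in for a "Month30" frequency: a fixed 30-day step
# built from the generic DateOffset, rather than a subclass.
month30 = pd.DateOffset(days=30)

# A date index that advances by exactly 30 days per period.
idx = pd.date_range("2000-01-01", periods=4, freq=month30)
print(list(idx.strftime("%Y-%m-%d")))
# → ['2000-01-01', '2000-01-31', '2000-03-01', '2000-03-31']
```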

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Towards a (temporary?) workaround for datetime issues at the xarray-level 187591179
276168323 https://github.com/pydata/xarray/issues/1084#issuecomment-276168323 https://api.github.com/repos/pydata/xarray/issues/1084 MDEyOklzc3VlQ29tbWVudDI3NjE2ODMyMw== jreback 953992 2017-01-30T19:43:09Z 2017-01-30T19:43:09Z MEMBER

@jhamman you just need a different frequency, in fact this one is pretty close: https://github.com/pandas-dev/pandas/blob/master/pandas/tseries/offsets.py#L2257

it's just a matter of defining a fixed-day month frequency (numpy has this by default anyhow); PeriodIndex would then happily take this.
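The "numpy has this by default" point, as a hedged sketch using plain datetime64 arithmetic, where a fixed 30-day step falls out directly:

```python
import numpy as np

# Fixed-day "months" via datetime64 arithmetic: a step of exactly 30 days.
dates = np.arange(np.datetime64("2000-01-01"),
                  np.datetime64("2000-05-01"),
                  np.timedelta64(30, "D"))
print(dates)
# 2000-01-01, 2000-01-31, 2000-03-01, 2000-03-31, 2000-04-30
```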

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Towards a (temporary?) workaround for datetime issues at the xarray-level 187591179
275969458 https://github.com/pydata/xarray/issues/1084#issuecomment-275969458 https://api.github.com/repos/pydata/xarray/issues/1084 MDEyOklzc3VlQ29tbWVudDI3NTk2OTQ1OA== jreback 953992 2017-01-30T02:42:58Z 2017-01-30T02:42:58Z MEMBER

just my 2c here: you are going to end up writing a huge amount of code to essentially re-implement PeriodIndex. Not really sure why you are going down this path.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Towards a (temporary?) workaround for datetime issues at the xarray-level 187591179
171503989 https://github.com/pydata/xarray/pull/702#issuecomment-171503989 https://api.github.com/repos/pydata/xarray/issues/702 MDEyOklzc3VlQ29tbWVudDE3MTUwMzk4OQ== jreback 953992 2016-01-14T02:13:04Z 2016-01-14T02:13:04Z MEMBER

makes sense about dask.array.dropna

though I think you should dropna if at all possible (or have an option at least)

it IS a bit surprising to get back the full index; not sure how common that will be in practice, especially if you are stacking multiple levels

finally, think about only supporting sequential stacking, as it conceptually makes more sense
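A small pandas sketch of the dropna point, under the usual pandas semantics: unstacking a sparse MultiIndex fills the missing cells of the cartesian product with NaN, and an explicit dropna gets you back to only the rows that actually exist:

```python
import pandas as pd

# Sparse MultiIndex: only 3 of the 4 (letter, number) combinations exist.
s = pd.Series(
    [1.0, 2.0, 3.0],
    index=pd.MultiIndex.from_tuples(
        [("a", 1), ("a", 2), ("b", 1)], names=["letter", "number"]
    ),
)

wide = s.unstack("number")          # missing ("b", 2) becomes NaN
tall = wide.stack().dropna()        # explicit dropna recovers the 3 real rows

assert wide.isna().sum().sum() == 1
assert s.sort_index().equals(tall.sort_index())
```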

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Basic multiIndex support and stack/unstack methods 124700322
171422543 https://github.com/pydata/xarray/pull/702#issuecomment-171422543 https://api.github.com/repos/pydata/xarray/issues/702 MDEyOklzc3VlQ29tbWVudDE3MTQyMjU0Mw== jreback 953992 2016-01-13T20:26:03Z 2016-01-13T20:26:14Z MEMBER

hmm, is dask.array dropna not implemented? I don't see why it couldn't conceptually be done (though I'm a bit unfamiliar with the impl).

  • set_index takes 'data' and makes it an 'index', so that is orthogonal. It would make a new Coordinate. reset_index would do the converse.
  • stack/unstack effectively take existing Coordinates and transform between them.

ok makes sense.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Basic multiIndex support and stack/unstack methods 124700322
171298177 https://github.com/pydata/xarray/pull/702#issuecomment-171298177 https://api.github.com/repos/pydata/xarray/issues/702 MDEyOklzc3VlQ29tbWVudDE3MTI5ODE3Nw== jreback 953992 2016-01-13T13:58:57Z 2016-01-13T13:58:57Z MEMBER

couple of comments:

  • I think the repr, though technically accurate, is a bit misleading. Lists of tuples are really only useful as a MI, so why not actually indicate that?
  • stack/unstack (as in [9]) is not idempotent, as you are reconstituting the full cartesian product of levels. This seems a bit odd (pandas can do this because it is separately tracking what is actually in the index, via the labels); I don't think you have this though?
  • these ops are really analogs of set_index/reset_index, rather than stack/unstack, so might be a bit confusing (though I think I get why you are doing it this way); it makes more sense esp for multi-dim. Maybe explain this in the pandas guide?
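The "labels" point can be seen directly on a pandas MultiIndex (a minimal sketch; the attribute has since been renamed from labels to codes):

```python
import pandas as pd

mi = pd.MultiIndex.from_tuples([("a", 1), ("a", 2), ("b", 1)])

# pandas tracks the levels (all possible values) separately from the
# codes (formerly labels: which combinations actually occur), so a
# sparse index is not silently the full cartesian product.
print([list(level) for level in mi.levels])  # [['a', 'b'], [1, 2]] -> 2x2 possible
print(len(mi))                               # 3 -> only 3 combinations present
```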

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Basic multiIndex support and stack/unstack methods 124700322
168675157 https://github.com/pydata/xarray/issues/701#issuecomment-168675157 https://api.github.com/repos/pydata/xarray/issues/701 MDEyOklzc3VlQ29tbWVudDE2ODY3NTE1Nw== jreback 953992 2016-01-04T13:21:16Z 2016-01-04T13:21:16Z MEMBER

yeh, this is fine. Maybe just note which dtypes are lossless and which are not. If you store things as Index objects, then this would go away.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  BUG: not converting datetime64[ns] with tz from pandas.Series 124685682
168565401 https://github.com/pydata/xarray/issues/699#issuecomment-168565401 https://api.github.com/repos/pydata/xarray/issues/699 MDEyOklzc3VlQ29tbWVudDE2ODU2NTQwMQ== jreback 953992 2016-01-04T02:05:33Z 2016-01-04T02:05:33Z MEMBER

ok, closing.

also FYI, these seem reasonable as a default.

```
In [9]: p = tm.makePanel()

In [10]: p
Out[10]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 30 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-02-11 00:00:00
Minor_axis axis: A to D

In [11]: p.to_xray()
Out[11]:
<xray.Dataset>
Dimensions:     (items: 3, major_axis: 30, minor_axis: 4)
Coordinates:
  * items       (items) object 'ItemA' 'ItemB' 'ItemC'
  * major_axis  (major_axis) datetime64[ns] 2000-01-03 2000-01-04 2000-01-05 ...
  * minor_axis  (minor_axis) object 'A' 'B' 'C' 'D'
Data variables:
    None        (items, major_axis, minor_axis) float64 -0.5374 0.5918 ...

In [12]: p = tm.makePanel4D()

In [13]: p
Out[13]:
<class 'pandas.core.panelnd.Panel4D'>
Dimensions: 3 (labels) x 3 (items) x 30 (major_axis) x 4 (minor_axis)
Labels axis: l1 to l3
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-02-11 00:00:00
Minor_axis axis: A to D

In [14]: p.to_xray()
Out[14]:
<xray.Dataset>
Dimensions:     (items: 3, labels: 3, major_axis: 30, minor_axis: 4)
Coordinates:
  * labels      (labels) object 'l1' 'l2' 'l3'
  * items       (items) object 'ItemA' 'ItemB' 'ItemC'
  * major_axis  (major_axis) datetime64[ns] 2000-01-03 2000-01-04 2000-01-05 ...
  * minor_axis  (minor_axis) object 'A' 'B' 'C' 'D'
Data variables:
    None        (labels, items, major_axis, minor_axis) float64 -0.5523 ...
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  BUG: Dataset.from_dataframe() losing dims? 124664101
168563184 https://github.com/pydata/xarray/issues/699#issuecomment-168563184 https://api.github.com/repos/pydata/xarray/issues/699 MDEyOklzc3VlQ29tbWVudDE2ODU2MzE4NA== jreback 953992 2016-01-04T01:33:31Z 2016-01-04T01:33:31Z MEMBER

ahh I see, so this is actually 1-dim (len of 3), ok.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  BUG: Dataset.from_dataframe() losing dims? 124664101
159757720 https://github.com/pydata/xarray/issues/641#issuecomment-159757720 https://api.github.com/repos/pydata/xarray/issues/641 MDEyOklzc3VlQ29tbWVudDE1OTc1NzcyMA== jreback 953992 2015-11-25T23:47:08Z 2015-11-25T23:47:08Z MEMBER

yep, agreed. anyhow I created a new issue for it https://github.com/pydata/pandas/issues/11704

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add rolling_apply method or function 113499493
159756318 https://github.com/pydata/xarray/issues/641#issuecomment-159756318 https://api.github.com/repos/pydata/xarray/issues/641 MDEyOklzc3VlQ29tbWVudDE1OTc1NjMxOA== jreback 953992 2015-11-25T23:43:03Z 2015-11-25T23:43:03Z MEMBER

it's not how it's implemented

that is MUCH slower than marginal calculations

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add rolling_apply method or function 113499493
159755572 https://github.com/pydata/xarray/issues/641#issuecomment-159755572 https://api.github.com/repos/pydata/xarray/issues/641 MDEyOklzc3VlQ29tbWVudDE1OTc1NTU3Mg== jreback 953992 2015-11-25T23:37:24Z 2015-11-25T23:37:24Z MEMBER

right, I think I will open a new issue for that. It's actually a bit tricky, as the iteration is done in cython itself, and it's a marginal calculation anyhow (e.g. you just keep adding the new value, subtracting values that fall off the window).
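The marginal update described above can be sketched in plain Python (a toy rolling sum, not the actual cython implementation):

```python
def rolling_sum(values, window):
    """O(n) rolling sum: add the incoming value, subtract the one
    that falls off the back of the window."""
    out, acc = [], 0
    for i, v in enumerate(values):
        acc += v
        if i >= window:
            acc -= values[i - window]          # value leaving the window
        out.append(acc if i >= window - 1 else None)
    return out

rolling_sum([1, 2, 3, 4, 5], 3)
# → [None, None, 6, 9, 12]
```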

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add rolling_apply method or function 113499493
159754015 https://github.com/pydata/xarray/issues/641#issuecomment-159754015 https://api.github.com/repos/pydata/xarray/issues/641 MDEyOklzc3VlQ29tbWVudDE1OTc1NDAxNQ== jreback 953992 2015-11-25T23:24:09Z 2015-11-25T23:24:09Z MEMBER

ohh, @shoyer you are thinking about defining __iter__ on the Rolling, for a custom aggregation? Or some other reason?
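For context, a hypothetical __iter__ on a Rolling object would presumably yield each complete window in turn; iter_windows below is an illustrative sketch, not a pandas API:

```python
def iter_windows(values, window):
    """Yield each complete window as a slice of the input (sketch only)."""
    for end in range(window, len(values) + 1):
        yield values[end - window:end]

# a custom aggregation that the built-in rolling methods don't cover
ranges = [max(w) - min(w) for w in iter_windows([3, 1, 4, 1, 5], 3)]
# windows [3,1,4], [1,4,1], [4,1,5] → ranges [3, 3, 4]
```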

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add rolling_apply method or function 113499493
159753832 https://github.com/pydata/xarray/issues/641#issuecomment-159753832 https://api.github.com/repos/pydata/xarray/issues/641 MDEyOklzc3VlQ29tbWVudDE1OTc1MzgzMg== jreback 953992 2015-11-25T23:22:51Z 2015-11-25T23:22:51Z MEMBER

@shoyer breath holding :) https://github.com/pydata/pandas/pull/11603

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add rolling_apply method or function 113499493
45769786 https://github.com/pydata/xarray/issues/79#issuecomment-45769786 https://api.github.com/repos/pydata/xarray/issues/79 MDEyOklzc3VlQ29tbWVudDQ1NzY5Nzg2 jreback 953992 2014-06-11T17:03:05Z 2014-06-11T17:03:05Z MEMBER

FYI, in the pointed-to PR joblib does work (w/o dill, actually), but IPython.parallel is still not working how I want it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Better support for batched/out-of-core computation 29921033


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 14.292ms · About: xarray-datasette