
issue_comments

7 rows where issue = 166439490 and user = 6213168 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
269479071 https://github.com/pydata/xarray/issues/906#issuecomment-269479071 https://api.github.com/repos/pydata/xarray/issues/906 MDEyOklzc3VlQ29tbWVudDI2OTQ3OTA3MQ== crusaderky 6213168 2016-12-28T13:46:19Z 2016-12-28T13:46:19Z MEMBER

@shoyer, are you happy for me to go ahead and change unstack() to respect the order of the first found series?

234687071 https://github.com/pydata/xarray/issues/906#issuecomment-234687071 https://api.github.com/repos/pydata/xarray/issues/906 MDEyOklzc3VlQ29tbWVudDIzNDY4NzA3MQ== crusaderky 6213168 2016-07-23T00:27:49Z 2016-07-23T00:27:49Z MEMBER

Thanks, didn't know

https://gist.github.com/crusaderky/002ba64ee270164931d32ea3366dce1f

234686438 https://github.com/pydata/xarray/issues/906#issuecomment-234686438 https://api.github.com/repos/pydata/xarray/issues/906 MDEyOklzc3VlQ29tbWVudDIzNDY4NjQzOA== crusaderky 6213168 2016-07-23T00:20:41Z 2016-07-23T00:20:41Z MEMBER

Fixed in attachment. The code uses the first found series as the order.

proper_unstack.zip
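The attached file is not reproduced here, but a minimal sketch of what "uses the first found series as the order" means, using a hypothetical hand-built MultiIndex (names and values are made up for illustration):

```python
import pandas

# Hypothetical index: 'x1' and 'b' appear in the data before 'x0' and 'a'.
mindex = pandas.MultiIndex.from_arrays(
    [['x1', 'x1', 'x0', 'x0'], ['b', 'a', 'b', 'a']],
    names=['x', 'letter'])

# pandas.unique preserves order of first appearance, unlike sorted(set(...)),
# so each level comes out in the order it was first written in the data.
appearance_order = [pandas.unique(mindex.get_level_values(name))
                    for name in mindex.names]
print(appearance_order)  # [array(['x1', 'x0'], ...), array(['b', 'a'], ...)]
```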

234004910 https://github.com/pydata/xarray/issues/906#issuecomment-234004910 https://api.github.com/repos/pydata/xarray/issues/906 MDEyOklzc3VlQ29tbWVudDIzNDAwNDkxMA== crusaderky 6213168 2016-07-20T16:33:15Z 2016-07-20T16:33:15Z MEMBER

I see. I'll see if I can think of a good way to cope with your two examples. BTW, my code above is buggy, as it blindly assumes that the first dim is also the outermost.
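For illustration, a hypothetical MultiIndex (not taken from the issue) where that assumption breaks, because the first level is the innermost, fastest-varying one:

```python
import pandas

# The first level repeats within each value of the second level, i.e. the
# first level is the innermost dimension and the second is the outermost.
mindex = pandas.MultiIndex.from_arrays(
    [['a', 'b', 'a', 'b'],
     ['x0', 'x0', 'x1', 'x1']],
    names=['letter', 'x'])

# Code that derives repetition strides as if levels[0] were the outermost
# dimension would compute the wrong ordering for this index.
```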

233904555 https://github.com/pydata/xarray/issues/906#issuecomment-233904555 https://api.github.com/repos/pydata/xarray/issues/906 MDEyOklzc3VlQ29tbWVudDIzMzkwNDU1NQ== crusaderky 6213168 2016-07-20T09:52:42Z 2016-07-20T09:52:42Z MEMBER

This preamble should be integrated inside unstack():

```python
import operator
from functools import reduce

import pandas  # needed for pandas.MultiIndex below


def proper_unstack(array, dim):
    # Regenerate Pandas multi-index to be ordered by appearance
    # TODO: check that the stacked coords repeat periodically
    # TODO: write a faster/cleaner algorithm using numpy
    mindex = array.coords[dim].to_pandas().index

    levels = []
    labels = []
    for dim_i, (levels_i, labels_i) in enumerate(zip(mindex.levels, mindex.labels)):
        step_inner = reduce(operator.mul, (len(lvl) for lvl in mindex.levels[dim_i + 1:]), 1)
        step_outer = reduce(operator.mul, (len(lvl) for lvl in mindex.levels[:dim_i]), 1)

        levels.append([levels_i[labels_i[j]] for j in range(0, levels_i.size * step_inner, step_inner)])
        labels.append(reduce(operator.add, ([j] * step_inner for j in range(levels_i.size))) * step_outer)

    mindex = pandas.MultiIndex(levels, labels, names=mindex.names)
    array = array.copy()
    array.coords[dim] = mindex
    return array.unstack(dim)


proper_unstack(a, 'dim_0')
```

```
<xarray.DataArray (x: 2, count: 4)>
array([[0, 1, 2, 3],
       [4, 5, 6, 7]])
Coordinates:
  * x        (x) object 'x1' 'x0'
  * count    (count) object 'first' 'second' 'third' 'fourth'
```
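`a` itself is not shown in the comment; a hypothetical construction that reproduces the output above (levels stored in sorted order, data written with 'x1' first; note that pandas of that era took a `labels=` argument, now called `codes=`):

```python
import pandas
import xarray

# Levels are alphabetically sorted, but the data appears with 'x1' first and
# with count in the order first, second, third, fourth.
index = pandas.MultiIndex(
    levels=[['x0', 'x1'], ['first', 'fourth', 'second', 'third']],
    labels=[[1, 1, 1, 1, 0, 0, 0, 0], [0, 2, 3, 1, 0, 2, 3, 1]],
    names=['x', 'count'])
a = xarray.DataArray(pandas.Series(list(range(8)), index))
```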

233888081 https://github.com/pydata/xarray/issues/906#issuecomment-233888081 https://api.github.com/repos/pydata/xarray/issues/906 MDEyOklzc3VlQ29tbWVudDIzMzg4ODA4MQ== crusaderky 6213168 2016-07-20T08:42:19Z 2016-07-20T08:42:19Z MEMBER

The order of appearance should be what dictates the output.

> in the worst case (e.g., random order for the MultiIndex) we'll have this issue no matter what rule we pick for assigning unstacked coordinates.

Not true. Using the order of appearance requires you to do a pick-by-index on the index. At the moment, you're doing a pick-by-index on the data.
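A toy illustration of the distinction, with made-up arrays rather than xarray internals:

```python
import numpy as np

data = np.arange(8).reshape(2, 4)   # unstacked values
x_labels = np.array(['x1', 'x0'])   # coordinate labels, in order of appearance

order = np.argsort(x_labels)        # alphabetical order; here [1, 0]

# pick-by-index on the data: every element of the (potentially huge) array moves
data_sorted = data[order]

# pick-by-index on the index: only the tiny label vector is rearranged and the
# data block can stay untouched
labels_sorted = x_labels[order]
```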

233794061 https://github.com/pydata/xarray/issues/906#issuecomment-233794061 https://api.github.com/repos/pydata/xarray/issues/906 MDEyOklzc3VlQ29tbWVudDIzMzc5NDA2MQ== crusaderky 6213168 2016-07-19T23:11:57Z 2016-07-19T23:11:57Z MEMBER

this workaround works:

```python
import pandas
import xarray

index2 = pandas.MultiIndex(
    levels=[['x0', 'x1'], ['first', 'second', 'third', 'fourth']],
    labels=[[0, 0, 0, 0, 1, 1, 1, 1], [0, 1, 2, 3, 0, 1, 2, 3]],
    names=['x', 'count'])
xarray.DataArray(pandas.Series(list(range(8)), index2)).unstack('dim_0')
```

```
<xarray.DataArray (x: 2, count: 4)>
array([[0, 1, 2, 3],
       [4, 5, 6, 7]], dtype=int64)
Coordinates:
  * x        (x) object 'x0' 'x1'
  * count    (count) object 'first' 'second' 'third' 'fourth'
```

However, I think the whole thing is incredibly convoluted, because everything looks good whether you visualize the original pandas Series/DataFrame or the stacked DataArray; unstack() causes an internal technicality of pandas to produce a real change in the data.

I came across this issue because I am using pandas to load a multi-index CSV from disk and then convert it to an n-dimensional xarray. In this situation I have no control over the MultiIndex, short of manually rebuilding it after the CSV load. The pandas DataFrame looks right, the stacked xarray looks right, yet the unstacked xarray gets magically sorted :$
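A minimal sketch of that workflow, with a hypothetical file name and column layout:

```python
import pandas
import xarray

# Two index columns ('x', 'count') plus a 'value' column, as written by some
# external tool; the MultiIndex comes straight out of read_csv.
df = pandas.read_csv('data.csv', index_col=['x', 'count'])
stacked = xarray.DataArray(df['value'])

# The stacked DataArray still looks right; the alphabetical reordering only
# shows up once it is unstacked.
unstacked = stacked.unstack(stacked.dims[0])
```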

Also, I don't understand why you say there are no performance implications. You're basically doing a pick-by-index rebuild of the array, which performs potentially random access across the whole input array, thus nullifying the benefits of the CPU cache. Compare this with numpy.ndarray.reshape(), which has the cost of a memcpy().
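A rough micro-benchmark sketch of that comparison (array size and access pattern are arbitrary assumptions):

```python
import timeit

import numpy as np

a = np.arange(4_000_000, dtype=np.int64)
perm = np.random.permutation(a.size)

# Contiguous reshape + copy: essentially a memcpy.
t_reshape = timeit.timeit(lambda: a.reshape(2000, 2000).copy(), number=20)
# Pick-by-index with a random permutation: a cache-unfriendly gather.
t_gather = timeit.timeit(lambda: a[perm], number=20)

print(f"reshape+copy: {t_reshape:.3f}s  random gather: {t_gather:.3f}s")
```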

I was going to add that doing pick-by-index with a dask array would be even worse, when I realised that a MultiIndex does not work at all when you chunk()... :(


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);