
issue_comments


11 rows where author_association = "CONTRIBUTOR" and issue = 618828102 sorted by updated_at descending


id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
633710066 https://github.com/pydata/xarray/pull/4064#issuecomment-633710066 https://api.github.com/repos/pydata/xarray/issues/4064 MDEyOklzc3VlQ29tbWVudDYzMzcxMDA2Ng== AndrewILWilliams 56925856 2020-05-25T20:38:49Z 2020-05-25T20:38:49Z CONTRIBUTOR

No problem! Thanks everyone for helping me get up to speed :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Auto chunk 618828102
632128807 https://github.com/pydata/xarray/pull/4064#issuecomment-632128807 https://api.github.com/repos/pydata/xarray/issues/4064 MDEyOklzc3VlQ29tbWVudDYzMjEyODgwNw== AndrewILWilliams 56925856 2020-05-21T14:49:37Z 2020-05-21T14:49:37Z CONTRIBUTOR

@keewis thanks for this! I've added what I think is a suitable test for DataArrays; do you think it's also a good idea to have a Dataset test?
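As a rough, hypothetical sketch (not the PR's actual tests; the names and array shapes below are invented), such a pair of tests could simply check that xarray's `chunk("auto")` produces the same chunking dask picks when asked to `rechunk("auto")` the raw array:

```python
# Hypothetical sketch of "auto" chunking tests -- not the tests added in the PR.
import dask.array as da
import numpy as np
import xarray as xr


def test_auto_chunk_dataarray():
    arr = xr.DataArray(np.zeros((100, 200)), dims=("x", "y"))
    # chunk("auto") should match the chunking dask itself chooses for "auto"
    expected = da.from_array(arr.values).rechunk("auto").chunks
    assert arr.chunk("auto").data.chunks == expected


def test_auto_chunk_dataset():
    ds = xr.Dataset({"var": (("x", "y"), np.zeros((100, 200)))})
    expected = da.from_array(ds["var"].values).rechunk("auto").chunks
    assert ds.chunk("auto")["var"].data.chunks == expected
```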

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Auto chunk 618828102
632090407 https://github.com/pydata/xarray/pull/4064#issuecomment-632090407 https://api.github.com/repos/pydata/xarray/issues/4064 MDEyOklzc3VlQ29tbWVudDYzMjA5MDQwNw== AndrewILWilliams 56925856 2020-05-21T13:36:41Z 2020-05-21T13:36:41Z CONTRIBUTOR

> This could test that `dataarray.chunk("auto").data` is the same as `dataarray.data.rechunk("auto")` (or something like that).

@dcherian Thanks for the tip :) Quick question: is there a reason why you're specifying the `.data` here? Also, I think I'm missing something, because I don't get what the difference between `.chunk()` and `.rechunk()` would be in this case.
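For background (a rough illustration, not taken from the PR): `.chunk()` is a method on the xarray object and returns a new `DataArray`/`Dataset`, while `.rechunk()` only exists on the dask array you reach through `.data`, which is presumably why the suggested comparison goes through `.data` on both sides:

```python
# Rough illustration of .chunk() (xarray) vs .rechunk() (dask); invented example.
import numpy as np
import xarray as xr

arr = xr.DataArray(np.zeros((100, 200)), dims=("x", "y"))

chunked = arr.chunk("auto")           # xarray method -> returns a DataArray
dask_arr = chunked.data               # the dask array backing that DataArray
rechunked = dask_arr.rechunk("auto")  # dask method -> returns a dask array

# Comparing the .chunks of the two dask arrays checks that xarray forwarded
# "auto" to dask correctly, without involving coords or attrs.
assert chunked.data.chunks == rechunked.chunks
```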

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Auto chunk 618828102
632035116 https://github.com/pydata/xarray/pull/4064#issuecomment-632035116 https://api.github.com/repos/pydata/xarray/issues/4064 MDEyOklzc3VlQ29tbWVudDYzMjAzNTExNg== AndrewILWilliams 56925856 2020-05-21T11:30:01Z 2020-05-21T11:30:01Z CONTRIBUTOR

Cheers! I forgot about the tests, will add them this week or next hopefully

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Auto chunk 618828102
629390609 https://github.com/pydata/xarray/pull/4064#issuecomment-629390609 https://api.github.com/repos/pydata/xarray/issues/4064 MDEyOklzc3VlQ29tbWVudDYyOTM5MDYwOQ== AndrewILWilliams 56925856 2020-05-15T17:40:39Z 2020-05-15T17:41:25Z CONTRIBUTOR

@dcherian do you have any idea about this mypy type error? I can't find much (accessible) documentation on how `Union[]` works in this context.

xarray/core/dataset.py:1737: error: Argument 2 to "fromkeys" of "dict" has incompatible type "Union[Number, Mapping[Hashable, Union[None, Number, Tuple[Number, ...]]]]"; expected "Union[None, Number, Tuple[Number, ...]]"
xarray/core/dataset.py:1740: error: Item "Number" of "Union[Number, Mapping[Hashable, Union[None, Number, Tuple[Number, ...]]]]" has no attribute "keys"

Edit: thanks to everyone for your help so far!
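As a hypothetical illustration of what mypy is objecting to (the function and names below are invented, not xarray's code): until an `isinstance` check narrows `chunks`, mypy treats it as the full `Union`, so neither the `dict.fromkeys(...)` call nor the `.keys()` access type-checks:

```python
# Invented sketch of the Union narrowing mypy wants -- not the code in dataset.py.
from numbers import Number
from typing import Hashable, Mapping, Tuple, Union

ChunkValue = Union[None, Number, Tuple[Number, ...]]


def normalize_chunks(
    dims: Tuple[Hashable, ...],
    chunks: Union[Number, Mapping[Hashable, ChunkValue]],
) -> Mapping[Hashable, ChunkValue]:
    if isinstance(chunks, Number):
        # narrowed to Number, so dict.fromkeys(dims, chunks) now type-checks
        return dict.fromkeys(dims, chunks)
    # narrowed to Mapping, so .keys()/.items() are known to exist
    return {dim: chunks[dim] for dim in chunks.keys()}
```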

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Auto chunk 618828102
629346101 https://github.com/pydata/xarray/pull/4064#issuecomment-629346101 https://api.github.com/repos/pydata/xarray/issues/4064 MDEyOklzc3VlQ29tbWVudDYyOTM0NjEwMQ== AndrewILWilliams 56925856 2020-05-15T16:11:04Z 2020-05-15T16:22:24Z CONTRIBUTOR

Okay, so I've traced the error back to the `map_blocks()` function. I don't fully understand the code for this function in `xarray/core/parallel.py`, but here's a quick report on the different behaviours.

Normally, when using the `make_ds()` and `make_da()` functions in `test_dask.py`, without any changes to `ds.chunk()`, we have:

```python
>>> def func(obj):
...     result = obj + obj.x + 5 * obj.y
...     return result
...
>>> xr.map_blocks(func, ds).unify_chunks().chunks
Frozen(SortedKeysDict({'x': (4, 4, 2), 'y': (5, 5, 5, 5), 'z': (4,)}))
>>> func(ds).chunk().unify_chunks().chunks
Frozen(SortedKeysDict({'x': (4, 4, 2), 'y': (5, 5, 5, 5), 'z': (4,)}))
```

However, when I use the changes I've made to `dataset.py` (changing `isinstance(chunks, Number)` to `is_scalar(chunks)`), the behaviour becomes:

```python
>>> xr.map_blocks(func, ds).unify_chunks().chunks
Frozen(SortedKeysDict({'x': (4, 4, 2), 'y': (5, 5, 5, 5), 'z': (4,)}))
>>> func(ds).chunk().unify_chunks().chunks
Frozen(SortedKeysDict({'x': (10,), 'y': (20,), 'z': (4,)}))
```

This means it now fails the `test_map_blocks()` call at `test_dask.py` line 1077.

I've tried to follow the code through and see what is actually happening when this change is made, but I'm out of my depth here. My guess is that `is_scalar(chunks)` gives the wrong behaviour when `chunks=None`?

Edit: I think that's the problem!
```python
>>> isinstance(None, numbers.Number)
False
>>> is_scalar(None)
True
```

I'll add in something to catch `None` and see if it fixes the error...
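Something like the following guard is presumably what's meant (a sketch under that assumption, not the actual diff; `is_scalar` does live in `xarray.core.utils`, but the surrounding function is invented):

```python
# Sketch of catching None before the is_scalar branch -- not the actual change.
# is_scalar(None) is True, but chunks=None must keep meaning "don't rechunk".
from xarray.core.utils import is_scalar


def chunks_as_dict(dims, chunks):
    if chunks is None:
        return None  # preserve the old "leave chunking alone" behaviour
    if is_scalar(chunks):
        # covers ints and "auto": apply the same spec to every dimension
        return dict.fromkeys(dims, chunks)
    return dict(chunks)  # already a per-dimension mapping
```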

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Auto chunk 618828102
629197362 https://github.com/pydata/xarray/pull/4064#issuecomment-629197362 https://api.github.com/repos/pydata/xarray/issues/4064 MDEyOklzc3VlQ29tbWVudDYyOTE5NzM2Mg== AndrewILWilliams 56925856 2020-05-15T12:05:22Z 2020-05-15T12:05:22Z CONTRIBUTOR

No unpushed commits

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Auto chunk 618828102
629191037 https://github.com/pydata/xarray/pull/4064#issuecomment-629191037 https://api.github.com/repos/pydata/xarray/issues/4064 MDEyOklzc3VlQ29tbWVudDYyOTE5MTAzNw== AndrewILWilliams 56925856 2020-05-15T11:49:23Z 2020-05-15T11:49:23Z CONTRIBUTOR

Do you mean the master merge? If that's wrong, would you be able to fix it for me? My bad; hopefully I'll be able to do it more cleanly in future.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Auto chunk 618828102
629168282 https://github.com/pydata/xarray/pull/4064#issuecomment-629168282 https://api.github.com/repos/pydata/xarray/issues/4064 MDEyOklzc3VlQ29tbWVudDYyOTE2ODI4Mg== AndrewILWilliams 56925856 2020-05-15T10:49:43Z 2020-05-15T10:49:43Z CONTRIBUTOR

Okay, that makes sense. Though it seems that I forked the master branch before @kmuehlbauer's commit that fixed this flake8 issue? So I think I need to make a new fork?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Auto chunk 618828102
629154336 https://github.com/pydata/xarray/pull/4064#issuecomment-629154336 https://api.github.com/repos/pydata/xarray/issues/4064 MDEyOklzc3VlQ29tbWVudDYyOTE1NDMzNg== AndrewILWilliams 56925856 2020-05-15T10:15:50Z 2020-05-15T10:17:38Z CONTRIBUTOR

Okay, cheers both! I'll have a look at these now.

@keewis sorry, I'm still getting used to this side of Git at the moment. Could you clarify what you mean by "merge master"? Do you mean merge with my local master?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Auto chunk 618828102
629147818 https://github.com/pydata/xarray/pull/4064#issuecomment-629147818 https://api.github.com/repos/pydata/xarray/issues/4064 MDEyOklzc3VlQ29tbWVudDYyOTE0NzgxOA== AndrewILWilliams 56925856 2020-05-15T10:00:35Z 2020-05-15T10:01:43Z CONTRIBUTOR

In my git clone, when I run the `flake8` and `black .` checks, I get the following messages.

(xarray-tests) Andrews-MacBook-Pro-2:xarray andrewwilliams$ black .
All done! ✨ 🍰 ✨
143 files left unchanged.
(xarray-tests) Andrews-MacBook-Pro-2:xarray andrewwilliams$ flake8
./xarray/backends/memory.py:43:32: E741 ambiguous variable name 'l'
./xarray/backends/common.py:244:32: E741 ambiguous variable name 'l'
./xarray/backends/.ipynb_checkpoints/memory-checkpoint.py:43:32: E741 ambiguous variable name 'l'

I'm not sure why something has changed in these files (I haven't touched them), and I also can't work out what the `l` variable is meant to be doing there.

Could this somehow be associated with loads of the checks failing below? Thanks! :)
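For reference, a tiny invented example of what E741 flags and the usual fix (the snippet below is not from those backend files; `l` is flagged because it is easily mistaken for `1` or `I`, and the `.ipynb_checkpoints` hit is just a stray Jupyter checkpoint copy that flake8 happened to scan):

```python
# Invented example of flake8 E741 ("ambiguous variable name 'l'") and its fix.
def count_long_names(names):
    l = [n for n in names if len(n) > 8]  # E741: 'l' reads like '1' or 'I'
    return len(l)


def count_long_names_fixed(names):
    long_names = [n for n in names if len(n) > 8]  # descriptive name, no E741
    return len(long_names)
```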

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Auto chunk 618828102


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);