issue_comments

11 rows where author_association = "MEMBER" and issue = 532940062 sorted by updated_at descending

user
  • dcherian 5
  • max-sixty 5
  • fujiisoup 1

issue
  • Add DataArray.pad, Dataset.pad, Variable.pad · 11

author_association
  • MEMBER · 11

id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
601282600 https://github.com/pydata/xarray/pull/3596#issuecomment-601282600 https://api.github.com/repos/pydata/xarray/issues/3596 MDEyOklzc3VlQ29tbWVudDYwMTI4MjYwMA== max-sixty 5635139 2020-03-19T16:36:55Z 2020-03-19T16:36:55Z MEMBER

Thanks @mark-boer ! Must be one of the largest first contributions...

+1 re merge + experimental warning; maybe we should do this more often. Cheers @dcherian

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add DataArray.pad, Dataset.pad, Variable.pad 532940062
601217610 https://github.com/pydata/xarray/pull/3596#issuecomment-601217610 https://api.github.com/repos/pydata/xarray/issues/3596 MDEyOklzc3VlQ29tbWVudDYwMTIxNzYxMA== dcherian 2448579 2020-03-19T14:41:39Z 2020-03-19T14:41:39Z MEMBER

Merging. I've added an experimental warning to the docstrings and we can discuss the IndexVariable situation here: https://github.com/pydata/xarray/issues/3868

Thanks @mark-boer this is a major contribution for your first PR!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add DataArray.pad, Dataset.pad, Variable.pad 532940062
598730380 https://github.com/pydata/xarray/pull/3596#issuecomment-598730380 https://api.github.com/repos/pydata/xarray/issues/3596 MDEyOklzc3VlQ29tbWVudDU5ODczMDM4MA== max-sixty 5635139 2020-03-13T13:52:13Z 2020-03-13T13:52:13Z MEMBER

Agree!

We could add an "Experimental" label and then worry less about future changes

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add DataArray.pad, Dataset.pad, Variable.pad 532940062
598717521 https://github.com/pydata/xarray/pull/3596#issuecomment-598717521 https://api.github.com/repos/pydata/xarray/issues/3596 MDEyOklzc3VlQ29tbWVudDU5ODcxNzUyMQ== dcherian 2448579 2020-03-13T13:23:37Z 2020-03-13T13:23:37Z MEMBER

Extremely good points @mark-boer

I propose we merge and open an issue to decide what to do with IndexVariables.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add DataArray.pad, Dataset.pad, Variable.pad 532940062
596837204 https://github.com/pydata/xarray/pull/3596#issuecomment-596837204 https://api.github.com/repos/pydata/xarray/issues/3596 MDEyOklzc3VlQ29tbWVudDU5NjgzNzIwNA== max-sixty 5635139 2020-03-10T00:08:11Z 2020-03-10T00:08:11Z MEMBER

> In some instances extrapolating all coords can lead to some unwanted behaviour. Would you suggest we only interpolate the indexes?
>
> How would we handle unsorted indexes?
>
> How would we extrapolate all the different kinds of indexes, like the MultiIndex or CategoricalIndex?

I agree; I can't see easy solutions to these. If there are evenly spaced indexes (e.g. dates, grids), then it's easy to know what to do. But there are plenty of times it's difficult, if not intractable.

One option to merge something useful without resolving these questions is to return Variables only and label this method experimental. I think this is immediately useful for the cases where these difficult questions don't need to be answered.

Of course if there are good answers to the questions, then even better!
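
To make the "return Variables only" option concrete, here is a minimal sketch against current xarray (assuming `Variable.pad` as it was eventually merged; behaviour at the time of this comment may have differed):

```python
import numpy as np
import xarray as xr

# A Variable has no coordinates, so padding it side-steps the index
# questions entirely: only the data itself is padded.
v = xr.Variable(("x",), np.arange(4.0))
padded = v.pad(x=(1, 2), mode="wrap")
print(padded.values)  # [3. 0. 1. 2. 3. 0. 1.]
```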

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add DataArray.pad, Dataset.pad, Variable.pad 532940062
596363935 https://github.com/pydata/xarray/pull/3596#issuecomment-596363935 https://api.github.com/repos/pydata/xarray/issues/3596 MDEyOklzc3VlQ29tbWVudDU5NjM2MzkzNQ== dcherian 2448579 2020-03-09T07:10:40Z 2020-03-09T07:10:40Z MEMBER

> Hmm, I don't really see a solution. What do you suggest?

:) I think we need to extrapolate indexes by default. It seems like the most sensible option.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add DataArray.pad, Dataset.pad, Variable.pad 532940062
595506745 https://github.com/pydata/xarray/pull/3596#issuecomment-595506745 https://api.github.com/repos/pydata/xarray/issues/3596 MDEyOklzc3VlQ29tbWVudDU5NTUwNjc0NQ== max-sixty 5635139 2020-03-06T00:04:03Z 2020-03-06T02:08:02Z MEMBER

> I do agree it can be confusing, but it is not unique in xarray. Dataset.shift only shifts data_vars, bfill and ffill only fill data_vars, etc.

I agree; I wouldn't have expected coords to be included given existing behavior. People can switch coords <> data_vars as needed, so there's an escape hatch.

Edit: But it's more awkward for indexes than non-index coords. The index becomes less useful with non-unique values, and generally indexes don't have nulls. I'm not sure what the other options would be: to some extent it's the intersection of pad with xarray's data model.
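
A rough illustration of that escape hatch, as a sketch against current xarray (the dataset and variable names are made up):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"t": ("x", np.arange(4.0))},
    coords={"x": [10, 20, 30, 40], "lat": ("x", [1.0, 2.0, 3.0, 4.0])},
)

# pad applies the requested mode to data_vars; coordinates (including the
# index "x") are carried through but filled with NaN at the new positions.
padded = ds.pad(x=(1, 1), mode="edge")

# The escape hatch: promote a coordinate to a data variable so it is padded
# with the chosen mode, then demote it back to a coordinate afterwards.
padded2 = ds.reset_coords("lat").pad(x=(1, 1), mode="edge").set_coords("lat")
```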

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add DataArray.pad, Dataset.pad, Variable.pad 532940062
595244925 https://github.com/pydata/xarray/pull/3596#issuecomment-595244925 https://api.github.com/repos/pydata/xarray/issues/3596 MDEyOklzc3VlQ29tbWVudDU5NTI0NDkyNQ== max-sixty 5635139 2020-03-05T14:02:45Z 2020-03-05T14:02:45Z MEMBER

This looks excellent @mark-boer , thank you!

I will try and have a proper look through today (but don't wait for me)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add DataArray.pad, Dataset.pad, Variable.pad 532940062
595202061 https://github.com/pydata/xarray/pull/3596#issuecomment-595202061 https://api.github.com/repos/pydata/xarray/issues/3596 MDEyOklzc3VlQ29tbWVudDU5NTIwMjA2MQ== dcherian 2448579 2020-03-05T12:22:02Z 2020-03-05T12:22:02Z MEMBER

I pushed some minor changes.

I think this is ready to go in.

The big outstanding issue is what to do about dimension coordinates or indexes. Currently this PR treats all variables in coords differently from those in data_vars. I think this is confusing.

I am thinking that we want to use linear extrapolation for IndexVariables by default and apply the same padding mode to all other variables, the reasoning being that IndexVariables with NaNs are hard to deal with and hard to fill in after the fact: `padded["x"] = padded.x.drop_vars("x").interpolate_na("x", fill_value="extrapolate")`
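
Expanded into a runnable form, that one-liner looks roughly like this (a sketch only; the example data is made up):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(4.0), dims="x", coords={"x": [0.0, 1.0, 2.0, 3.0]})

# Pad the data; the index "x" gets NaN at the new positions.
padded = da.pad(x=(2, 1), mode="edge")

# Fill the padded index by linear extrapolation. drop_vars("x") removes the
# NaN-containing coordinate so interpolate_na falls back to equally spaced
# positions along the dimension.
padded["x"] = padded["x"].drop_vars("x").interpolate_na("x", fill_value="extrapolate")
print(padded["x"].values)  # [-2. -1.  0.  1.  2.  3.  4.]
```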

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add DataArray.pad, Dataset.pad, Variable.pad 532940062
563311262 https://github.com/pydata/xarray/pull/3596#issuecomment-563311262 https://api.github.com/repos/pydata/xarray/issues/3596 MDEyOklzc3VlQ29tbWVudDU2MzMxMTI2Mg== dcherian 2448579 2019-12-09T16:11:46Z 2019-12-09T16:11:46Z MEMBER

It seems like we have some value mismatches on dask==1.2. @fujiisoup is this the error you found?

```
=================================== FAILURES ===================================
_________ TestVariableWithDask.test_pad[xr_arg0-np_arg0-linear_ramp] __________

self = <xarray.tests.test_variable.TestVariableWithDask object at 0x7fb69b629b00>
mode = 'linear_ramp', xr_arg = {'x': (2, 1)}, np_arg = ((2, 1), (0, 0), (0, 0))

@pytest.mark.parametrize(
    "mode",
    [
        pytest.param("mean", marks=pytest.mark.xfail),
        pytest.param("median", marks=pytest.mark.xfail),
        pytest.param("reflect", marks=pytest.mark.xfail),
        "edge",
        "linear_ramp",
        "maximum",
        "minimum",
        "symmetric",
        "wrap",
    ],
)
@pytest.mark.parametrize(
    "xr_arg, np_arg",
    [
        [{"x": (2, 1)}, ((2, 1), (0, 0), (0, 0))],
        [{"y": (0, 3)}, ((0, 0), (0, 3), (0, 0))],
        [{"x": (3, 1), "z": (2, 0)}, ((3, 1), (0, 0), (2, 0))],
    ],
)
def test_pad(self, mode, xr_arg, np_arg):
    data = np.arange(4 * 3 * 2).reshape(4, 3, 2)
    v = self.cls(["x", "y", "z"], data)

    actual = v.pad(mode=mode, **xr_arg)
    expected = np.pad(data, np_arg, mode=mode,)
>       assert_array_equal(actual, expected)
E       AssertionError:
E       Arrays are not equal
E
E       (mismatch 2.3809523809523796%)
E        x: array([[[ 0, 0],
E               [ 0, 0],
E               [ 0, 0]],...
E        y: array([[[ 0, 0],
E               [ 0, 0],
E               [ 0, 0]],...

xarray/tests/test_variable.py:821: AssertionError
_________ TestVariableWithDask.test_pad[xr_arg1-np_arg1-linear_ramp] __________

self = <xarray.tests.test_variable.TestVariableWithDask object at 0x7fb69df9eb00>
mode = 'linear_ramp', xr_arg = {'y': (0, 3)}, np_arg = ((0, 0), (0, 3), (0, 0))

@pytest.mark.parametrize(
    "mode",
    [
        pytest.param("mean", marks=pytest.mark.xfail),
        pytest.param("median", marks=pytest.mark.xfail),
        pytest.param("reflect", marks=pytest.mark.xfail),
        "edge",
        "linear_ramp",
        "maximum",
        "minimum",
        "symmetric",
        "wrap",
    ],
)
@pytest.mark.parametrize(
    "xr_arg, np_arg",
    [
        [{"x": (2, 1)}, ((2, 1), (0, 0), (0, 0))],
        [{"y": (0, 3)}, ((0, 0), (0, 3), (0, 0))],
        [{"x": (3, 1), "z": (2, 0)}, ((3, 1), (0, 0), (2, 0))],
    ],
)
def test_pad(self, mode, xr_arg, np_arg):
    data = np.arange(4 * 3 * 2).reshape(4, 3, 2)
    v = self.cls(["x", "y", "z"], data)

    actual = v.pad(mode=mode, **xr_arg)
    expected = np.pad(data, np_arg, mode=mode,)
>       assert_array_equal(actual, expected)
E       AssertionError:
E       Arrays are not equal
E
E       (mismatch 16.66666666666667%)
E        x: array([[[ 0, 1],
E               [ 2, 3],
E               [ 4, 5],...
E        y: array([[[ 0, 1],
E               [ 2, 3],
E               [ 4, 5],...

xarray/tests/test_variable.py:821: AssertionError
_________ TestVariableWithDask.test_pad[xr_arg2-np_arg2-linear_ramp] __________

self = <xarray.tests.test_variable.TestVariableWithDask object at 0x7fb69c609860>
mode = 'linear_ramp', xr_arg = {'x': (3, 1), 'z': (2, 0)}
np_arg = ((3, 1), (0, 0), (2, 0))

@pytest.mark.parametrize(
    "mode",
    [
        pytest.param("mean", marks=pytest.mark.xfail),
        pytest.param("median", marks=pytest.mark.xfail),
        pytest.param("reflect", marks=pytest.mark.xfail),
        "edge",
        "linear_ramp",
        "maximum",
        "minimum",
        "symmetric",
        "wrap",
    ],
)
@pytest.mark.parametrize(
    "xr_arg, np_arg",
    [
        [{"x": (2, 1)}, ((2, 1), (0, 0), (0, 0))],
        [{"y": (0, 3)}, ((0, 0), (0, 3), (0, 0))],
        [{"x": (3, 1), "z": (2, 0)}, ((3, 1), (0, 0), (2, 0))],
    ],
)
def test_pad(self, mode, xr_arg, np_arg):
    data = np.arange(4 * 3 * 2).reshape(4, 3, 2)
    v = self.cls(["x", "y", "z"], data)

    actual = v.pad(mode=mode, **xr_arg)
    expected = np.pad(data, np_arg, mode=mode,)
>       assert_array_equal(actual, expected)
E       AssertionError:
E       Arrays are not equal
E
E       (mismatch 5.208333333333329%)
E        x: array([[[ 0, 0, 0, 0],
E               [ 0, 0, 0, 0],
E               [ 0, 0, 0, 0]],...
E        y: array([[[ 0, 0, 0, 0],
E               [ 0, 0, 0, 0],
E               [ 0, 0, 0, 0]],...

xarray/tests/test_variable.py:821: AssertionError
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add DataArray.pad, Dataset.pad, Variable.pad 532940062
562821225 https://github.com/pydata/xarray/pull/3596#issuecomment-562821225 https://api.github.com/repos/pydata/xarray/issues/3596 MDEyOklzc3VlQ29tbWVudDU2MjgyMTIyNQ== fujiisoup 6815844 2019-12-07T06:47:32Z 2019-12-07T06:47:32Z MEMBER

Hi, @mark-boer. In #3587, I tried using dask's pad method but noticed a few bugs in older (but newer than 1.2) dask. It would be very welcome if you added this method to dask_array_compat. I will hold off on merging #3587 until this PR is completed.

Thanks for your contribution :)
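
A compat shim like that is usually a small version gate. A rough sketch of the general pattern (the cutoff version and the fallback below are hypothetical, not the code that actually landed in xarray):

```python
from distutils.version import LooseVersion

import dask
import dask.array as da
import numpy as np

if LooseVersion(dask.__version__) >= LooseVersion("2.8.1"):
    # Recent dask: use dask.array.pad directly.
    pad = da.pad
else:
    def pad(array, pad_width, mode="constant", **kwargs):
        # Naive fallback for buggy older dask: materialize the array,
        # pad eagerly with NumPy, and wrap the result back in dask.
        padded = np.pad(np.asarray(array), pad_width, mode=mode, **kwargs)
        return da.from_array(padded, chunks="auto")
```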

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add DataArray.pad, Dataset.pad, Variable.pad 532940062

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
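
For reference, the filtered view above can be reproduced from Python against a local copy of this database (the filename is hypothetical):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of this database
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'MEMBER' AND issue = 532940062
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows))  # 11 in the snapshot shown above
```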