issues


6 rows where comments = 11, repo = 13221727 and user = 5635139, sorted by updated_at descending


Facets:

  • type: pull 4, issue 2
  • state: closed 6
  • repo: xarray 6
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at (sorted descending), closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
id: 2034575163 · node_id: PR_kwDOAMm_X85hn4Pn · number: 8539 · title: Filter out doctest warning · user: max-sixty (5635139) · state: closed · locked: 0 · comments: 11 · created_at: 2023-12-10T23:11:36Z · updated_at: 2023-12-12T06:37:54Z · closed_at: 2023-12-11T21:00:01Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/8539

Trying to fix #8537. Not sure it'll work and can't test locally so seeing if it passes CI

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8539/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
id: 729117202 · node_id: MDU6SXNzdWU3MjkxMTcyMDI= · number: 4539 · title: Failing main branch — test_save_mfdataset_compute_false_roundtrip · user: max-sixty (5635139) · state: closed · locked: 0 · comments: 11 · created_at: 2020-10-25T21:22:36Z · updated_at: 2023-09-21T06:48:03Z · closed_at: 2023-09-20T19:57:17Z · author_association: MEMBER

We had the main branch passing for a while, but unfortunately there's another test failure, now in our new Linux py38-backend-api-v2 test case, in test_save_mfdataset_compute_false_roundtrip:

link

```
self = <xarray.tests.test_backends.TestDask object at 0x7f821a0d6190>

def test_save_mfdataset_compute_false_roundtrip(self):
    from dask.delayed import Delayed

    original = Dataset({"foo": ("x", np.random.randn(10))}).chunk()
    datasets = [original.isel(x=slice(5)), original.isel(x=slice(5, 10))]
    with create_tmp_file(allow_cleanup_failure=ON_WINDOWS) as tmp1:
        with create_tmp_file(allow_cleanup_failure=ON_WINDOWS) as tmp2:
            delayed_obj = save_mfdataset(
                datasets, [tmp1, tmp2], engine=self.engine, compute=False
            )
            assert isinstance(delayed_obj, Delayed)
            delayed_obj.compute()
            with open_mfdataset(
                [tmp1, tmp2], combine="nested", concat_dim="x"
            ) as actual:
>               assert_identical(actual, original)
E               AssertionError: Left and right Dataset objects are not identical
E
E               Differing data variables:
E               L   foo      (x) float64 dask.array<chunksize=(5,), meta=np.ndarray>
E               R   foo      (x) float64 dask.array<chunksize=(10,), meta=np.ndarray>

/home/vsts/work/1/s/xarray/tests/test_backends.py:3274: AssertionError
```

@aurghs & @alexamici — are you familiar with this? Thanks in advance

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4539/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
id: 515662368 · node_id: MDExOlB1bGxSZXF1ZXN0MzM1MDg1NTM4 · number: 3475 · title: drop_vars; deprecate drop for variables · user: max-sixty (5635139) · state: closed · locked: 0 · comments: 11 · created_at: 2019-10-31T18:46:48Z · updated_at: 2019-11-07T23:20:40Z · closed_at: 2019-11-07T20:13:51Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/3475

Introduces `drop_vars` and deprecates using `drop` for variables. `drop` is widely used for the deprecated case, so this is a fairly wide blast radius.

It's more churn than is ideal, but I do think it's a much better API.
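
As a minimal sketch of the call-site change (the dataset and variable names here are illustrative, not taken from the PR diff):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"a": (("x", "y"), np.random.rand(10, 2))})

# Deprecated by this PR when used with variable names:
# ds.drop("a")        # now emits a deprecation warning

# The replacement introduced here:
ds = ds.drop_vars("a")
```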

This is ready for review, though I'm sure I've missed references in the docs etc. (it took my peak regex skills to find/replace only the deprecated usages!)

Originally discussed here

  • [x] Tests added
  • [x] Passes `black . && mypy . && flake8`
  • [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3475/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
id: 469871658 · node_id: MDExOlB1bGxSZXF1ZXN0Mjk4OTk4MTE2 · number: 3142 · title: Black · user: max-sixty (5635139) · state: closed · locked: 0 · comments: 11 · created_at: 2019-07-18T16:35:05Z · updated_at: 2019-08-08T20:55:14Z · closed_at: 2019-08-08T20:54:34Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/3142

From https://github.com/pydata/xarray/issues/3092

  • [x] Reformat code
  • [x] CI checks
  • [x] Short instructions for how to merge an existing PR (i.e. avoid manual merge resolution)
  • Not sure if there's magic here - I think people would just have to format their code and then hope git can resolve (i.e. because there would still be no common parent)?
  • [x] Black badge
  • [x] Do we want to keep flake8 checks? Black is mostly stricter but not always, e.g. on lines at the end of files. (+0.3 from me to use only black and stop using flake8)

~~- [ ] Do we want to include isort? (-0.1 from me, even though I like the tool)~~

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3142/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
id: 160505403 · node_id: MDU6SXNzdWUxNjA1MDU0MDM= · number: 884 · title: Iterating over a Dataset iterates only over its data_vars · user: max-sixty (5635139) · state: closed · locked: 0 · milestone: 0.11 (2856429) · comments: 11 · created_at: 2016-06-15T19:35:50Z · updated_at: 2018-10-25T15:26:59Z · closed_at: 2018-10-25T15:26:59Z · author_association: MEMBER

This has been a small-but-persistent issue for me for a while. I suspect that my perspective might be dependent on my current outlook, but socializing it here to test if it's secular...

Currently `Dataset.keys()` returns both variables and coordinates (but not its attrs keys):

```python
In [5]: ds = xr.Dataset({'a': (('x', 'y'), np.random.rand(10, 2))})

In [12]: list(ds.keys())
Out[12]: ['a', 'x', 'y']
```

Is this conceptually correct? I would posit that a Dataset is a mapping of keys to variables, and the coordinates contain values that label that data.

So should Dataset.keys() instead return just the keys of the Variables?

We often pass a dataset around as a Mapping of keys to values - but then when we run a function across each of the keys, it runs over both the variables' keys and the coordinates' / labels' keys.

In Pandas, DataFrame.keys() returns just the columns, so that conforms to what we need. While I think the xarray design is in general much better in these areas, this is one area that pandas seems to get correct - and because of the inconsistency between pandas & xarray, we're having to coerce our objects to pandas DataFrames before passing them off to functions that pull out their keys (this is also why we can't just look at ds.data_vars.keys() - because it breaks that duck-typing).

Does that make sense?
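
A minimal sketch of the behaviour being discussed; the `['a', 'x', 'y']` output matches what the issue quotes, while current xarray (after the change this issue led to) returns only the data variable names:

```python
import numpy as np
import pandas as pd
import xarray as xr

ds = xr.Dataset({"a": (("x", "y"), np.random.rand(10, 2))})
df = pd.DataFrame({"a": np.random.rand(10)})

list(ds.keys())            # ['a', 'x', 'y'] in the output quoted above; just ['a'] in current xarray
list(ds.data_vars.keys())  # ['a'] - only the data variables
list(df.keys())            # ['a'] - pandas returns only the columns
```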

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/884/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
id: 121601010 · node_id: MDExOlB1bGxSZXF1ZXN0NTMzMzI4MDY= · number: 677 · title: Dataset constructor can take pandas objects · user: max-sixty (5635139) · state: closed · locked: 0 · comments: 11 · created_at: 2015-12-10T23:22:32Z · updated_at: 2016-01-02T07:44:26Z · closed_at: 2016-01-02T07:34:50Z · author_association: MEMBER · draft: 0 · pull_request: pydata/xarray/pulls/677

Closes a 'first step' of https://github.com/xray/xray/issues/676. Works only for simple, non-MultiIndexed pandas objects.
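
A minimal sketch of the behaviour this adds, assuming a simple (single-level index) Series; the names are illustrative:

```python
import pandas as pd
import xarray as xr

s = pd.Series([1.0, 2.0, 3.0], index=pd.Index([10, 20, 30], name="x"))

# After this PR the constructor accepts the pandas object directly;
# the Series index becomes the 'x' coordinate.
ds = xr.Dataset({"temperature": s})
print(ds)
```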

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/677/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
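
For reference, a sketch of reproducing this page's result set against a local copy of the database; the filename `github.db` is an assumption, and the WHERE/ORDER BY clauses mirror the filter described at the top of the page:

```python
import sqlite3

conn = sqlite3.connect("github.db")  # assumed local path to this Datasette database
rows = conn.execute(
    """
    SELECT number, title, type, state, comments, updated_at
    FROM issues
    WHERE comments = 11 AND repo = 13221727 AND [user] = 5635139
    ORDER BY updated_at DESC
    """
).fetchall()
for row in rows:
    print(row)
conn.close()
```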