issues

18 rows where type = "pull" and user = 90008 sorted by updated_at descending


id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
2129180716 PR_kwDOAMm_X85mld8X 8736 Make list_chunkmanagers more resilient to broken entrypoints hmaarrfk 90008 closed 0     6 2024-02-11T21:37:38Z 2024-03-13T17:54:02Z 2024-03-13T17:54:02Z CONTRIBUTOR   0 pydata/xarray/pulls/8736

As I'm developing my custom chunk manager, I'm often switching between my development branch and my production branch, which breaks the entrypoint.

This made xarray impossible to import unless I re-ran `pip install -e . -vv`, which is somewhat tiring.

This should help xarray be more resilient to bugs in other software, in case they install malformed entrypoints.

Example:

```python
from xarray.core.parallelcompat import list_chunkmanagers

list_chunkmanagers()
# <ipython-input-3-19326f4950bc>:1: UserWarning: Failed to load entrypoint MyChunkManager
#   due to No module named 'my.array._chunkmanager'. Skipping.

list_chunkmanagers()
# {'dask': <xarray.core.daskmanager.DaskManager at 0x7f5b826231c0>}
```
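A minimal sketch of the defensive loading this PR is after (the function shape and entrypoint names here are illustrative, not xarray's actual internals): each entrypoint is loaded inside a try/except, and failures become warnings instead of import-time errors.

```python
import warnings


def load_chunkmanagers(entrypoints):
    """Load chunk manager entrypoints, skipping (with a warning) any that fail."""
    available = {}
    for entrypoint in entrypoints:
        try:
            # entrypoint.load() returns the manager class; instantiate it
            available[entrypoint.name] = entrypoint.load()()
        except Exception as e:
            # broad on purpose: a broken checkout can raise almost anything
            warnings.warn(
                f"Failed to load entrypoint {entrypoint.name} due to {e}. Skipping.",
                stacklevel=2,
            )
    return available
```

With this shape, one checked-out-but-broken development entrypoint no longer makes `import xarray` fail.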

Thank you for considering.

  • [x] Closes #xxxx
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst

This is mostly a quality-of-life thing for developers; I don't see this as a user-visible change.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8736/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2131345470 PR_kwDOAMm_X85ms1Q6 8738 Don't break users that were already using ChunkManagerEntrypoint hmaarrfk 90008 closed 0     1 2024-02-13T02:17:55Z 2024-02-13T15:37:54Z 2024-02-13T03:21:32Z CONTRIBUTOR   0 pydata/xarray/pulls/8738

For example, you just broke cubed

https://github.com/xarray-contrib/cubed-xarray/blob/main/cubed_xarray/cubedmanager.py#L15

Not sure how much you care; it didn't seem like anybody other than me ever tried this module on GitHub...
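A sketch of the backward-compatibility pattern involved (class and helper names here are illustrative, not xarray's code): keep the old public name importable as a subclass that warns on use, so downstream packages such as cubed-xarray keep working.

```python
import warnings


class ChunkManagerEntrypoint:
    """Stand-in for the current class; the real one lives in xarray."""


def deprecated_alias(new_cls, old_name):
    """Return a subclass that warns when instantiated under its old name."""

    class _Alias(new_cls):
        def __init__(self, *args, **kwargs):
            warnings.warn(
                f"{old_name} is deprecated; use {new_cls.__name__} instead.",
                DeprecationWarning,
                stacklevel=2,
            )
            super().__init__(*args, **kwargs)

    _Alias.__name__ = old_name
    return _Alias


# old name kept importable so existing subclasses keep working
LegacyChunkManagerEntrypoint = deprecated_alias(
    ChunkManagerEntrypoint, "LegacyChunkManagerEntrypoint"
)
```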

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8738/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2131364916 PR_kwDOAMm_X85ms5QB 8739 Add a test for usability of duck arrays with chunks property hmaarrfk 90008 open 0     1 2024-02-13T02:46:47Z 2024-02-13T03:35:24Z   CONTRIBUTOR   0 pydata/xarray/pulls/8739

xref: https://github.com/pydata/xarray/issues/8733

```python
xarray/tests/test_variable.py F
================================================ FAILURES ================================================
____________________________ TestAsCompatibleData.test_duck_array_with_chunks ____________________________
self = <xarray.tests.test_variable.TestAsCompatibleData object at 0x7f3d1b122e60>

    def test_duck_array_with_chunks(self):
        # Non indexable type
        class CustomArray(NDArrayMixin, indexing.ExplicitlyIndexed):
            def __init__(self, array):
                self.array = array

            @property
            def chunks(self):
                return self.shape

            def __array_function__(self, *args, **kwargs):
                return NotImplemented

            def __array_ufunc__(self, *args, **kwargs):
                return NotImplemented

        array = CustomArray(np.arange(3))
        assert is_chunked_array(array)
        var = Variable(dims=("x"), data=array)
>       var.load()

/home/mark/git/xarray/xarray/tests/test_variable.py:2745:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/home/mark/git/xarray/xarray/core/variable.py:936: in load
    self._data = to_duck_array(self._data, **kwargs)
/home/mark/git/xarray/xarray/namedarray/pycompat.py:129: in to_duck_array
    chunkmanager = get_chunked_array_type(data)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = (CustomArray(array=array([0, 1, 2])),)
chunked_arrays = [CustomArray(array=array([0, 1, 2]))]
chunked_array_types = {<class 'xarray.tests.test_variable.TestAsCompatibleData.test_duck_array_with_chunks.<locals>.CustomArray'>}
chunkmanagers = {'dask': <xarray.namedarray.daskmanager.DaskManager object at 0x7f3d1b1568f0>}

    def get_chunked_array_type(*args: Any) -> ChunkManagerEntrypoint[Any]:
        """
        Detects which parallel backend should be used for given set of arrays.

        Also checks that all arrays are of same chunking type (i.e. not a mix of cubed and dask).
        """
        # TODO this list is probably redundant with something inside xarray.apply_ufunc
        ALLOWED_NON_CHUNKED_TYPES = {int, float, np.ndarray}

        chunked_arrays = [
            a
            for a in args
            if is_chunked_array(a) and type(a) not in ALLOWED_NON_CHUNKED_TYPES
        ]

        # Asserts all arrays are the same type (or numpy etc.)
        chunked_array_types = {type(a) for a in chunked_arrays}
        if len(chunked_array_types) > 1:
            raise TypeError(
                f"Mixing chunked array types is not supported, but received multiple types: {chunked_array_types}"
            )
        elif len(chunked_array_types) == 0:
            raise TypeError("Expected a chunked array but none were found")

        # iterate over defined chunk managers, seeing if each recognises this array type
        chunked_arr = chunked_arrays[0]
        chunkmanagers = list_chunkmanagers()
        selected = [
            chunkmanager
            for chunkmanager in chunkmanagers.values()
            if chunkmanager.is_chunked_array(chunked_arr)
        ]
        if not selected:
>           raise TypeError(
                f"Could not find a Chunk Manager which recognises type {type(chunked_arr)}"
E           TypeError: Could not find a Chunk Manager which recognises type <class 'xarray.tests.test_variable.TestAsCompatibleData.test_duck_array_with_chunks.<locals>.CustomArray'>

/home/mark/git/xarray/xarray/namedarray/parallelcompat.py:158: TypeError
============================================ warnings summary ============================================
xarray/testing/assertions.py:9
  /home/mark/git/xarray/xarray/testing/assertions.py:9: DeprecationWarning: Pyarrow will become a required dependency of pandas in the next major release of pandas (pandas 3.0), (to allow more performant data types, such as the Arrow string type, and better interoperability with other libraries) but was not found to be installed on your system. If this would cause problems for you, please provide us feedback at https://github.com/pandas-dev/pandas/issues/54466
    import pandas as pd

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================================== short test summary info =========================================
FAILED xarray/tests/test_variable.py::TestAsCompatibleData::test_duck_array_with_chunks - TypeError: Could not find a Chunk Manager which recognises type <class 'xarray.tests.test_variable.Te...
====================================== 1 failed, 1 warning in 0.77s ======================================
(dev) ✘-1 ~/git/xarray [add_test_for_duck_array|✔]
```
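The failure above comes from the gap between two checks, sketched here with simplified, illustrative functions (not xarray's real implementations): the chunked-array test is duck-typed and satisfied by any object with a `chunks` attribute, while finding a chunk manager requires a registered manager to actually recognise the type.

```python
import numpy as np


def is_chunked_array_naive(x):
    # Duck-typed check: any object with a `chunks` attribute passes.
    return hasattr(x, "chunks")


def get_chunked_array_type_naive(x, chunkmanagers):
    # Stricter check: some registered manager must recognise the type.
    selected = [m for m in chunkmanagers.values() if m.is_chunked_array(x)]
    if not selected:
        raise TypeError(
            f"Could not find a Chunk Manager which recognises type {type(x)}"
        )
    return selected[0]


class CustomArray:
    """A duck array exposing chunks, with no manager registered for it."""
    chunks = (3,)


assert is_chunked_array_naive(CustomArray())      # passes the duck test...
assert not is_chunked_array_naive(np.arange(3))   # plain ndarray has no .chunks
```

So an array can pass the first gate and still die at the second, which is exactly what the test exercises.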
  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8739/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2034395026 PR_kwDOAMm_X85hnUnc 8534 Point users to where in their code they should make mods for Dataset.dims hmaarrfk 90008 closed 0     8 2023-12-10T14:31:29Z 2023-12-10T18:50:10Z 2023-12-10T18:23:42Z CONTRIBUTOR   0 pydata/xarray/pulls/8534

It's somewhat annoying to get warnings that point to a line within a library where the warning is issued; it makes it unclear what one needs to change.

This points to the user's access of the dims attribute.
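The mechanism in question is the stacklevel argument to warnings.warn; a minimal illustration (the function and message here are made up, not xarray's actual code):

```python
import warnings


def dims(dataset_like):
    """Library-internal accessor that warns about a future behavior change."""
    warnings.warn(
        "The return type of `dims` will change in a future version.",
        FutureWarning,
        stacklevel=2,  # attribute the warning to the *caller's* line, not this one
    )
    return {"x": 3}
```

With `stacklevel=2`, the reported `filename:lineno` is the user's access of the attribute, so the warning points at the code that actually needs changing.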

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8534/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1731320789 PR_kwDOAMm_X85Rougi 7883 Avoid one call to len when getting ndim of Variables hmaarrfk 90008 closed 0     3 2023-05-29T23:37:10Z 2023-07-03T15:44:32Z 2023-07-03T15:44:31Z CONTRIBUTOR   0 pydata/xarray/pulls/7883

I admit this is a super micro-optimization, but in certain cases it avoids the creation of a tuple and a call to len on it.

I hit this as I was trying to understand why Variable indexing was so much slower than numpy indexing. It seems that bounds checking in python is just slower than in C.

Feel free to close this one if you don't want this kind of optimization.
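The micro-optimization being described, in sketch form (a toy stand-in, not xarray's Variable): `len(data.shape)` materialises the shape tuple and then calls len on it, while `data.ndim` reads the dimensionality directly.

```python
import numpy as np


class VariableLike:
    """Toy wrapper illustrating the two ways to compute ndim."""

    def __init__(self, data):
        self._data = np.asarray(data)

    @property
    def ndim_via_len(self):
        return len(self._data.shape)  # builds a tuple, then len() on it

    @property
    def ndim_direct(self):
        return self._data.ndim  # asks the array directly
```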

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7883/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
690546795 MDExOlB1bGxSZXF1ZXN0NDc3NDIwMTkz 4400 [WIP] Support nano second time encoding. hmaarrfk 90008 closed 0     10 2020-09-02T00:16:04Z 2023-03-26T20:59:00Z 2023-03-26T20:08:50Z CONTRIBUTOR   0 pydata/xarray/pulls/4400

Not too sure I have the bandwidth to complete this, seeing as cftime and datetime don't have nanoseconds, but maybe it can help somebody.

  • [x] Closes #4183
  • [x] Tests added
  • [ ] Passes isort . && black . && mypy . && flake8
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4400/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1475567394 PR_kwDOAMm_X85ESe3u 7356 Avoid loading entire dataset by getting the nbytes in an array hmaarrfk 90008 closed 0     14 2022-12-05T03:29:53Z 2023-03-17T17:31:22Z 2022-12-12T16:46:40Z CONTRIBUTOR   0 pydata/xarray/pulls/7356

Using .data accidentally tries to load whole lazy arrays into memory.

Sad.
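The idea, sketched with an illustrative helper (xarray's actual nbytes property differs): derive the byte count from metadata that lazy arrays expose cheaply, instead of touching .data.

```python
import numpy as np


def nbytes(array_like):
    """Byte count without materialising the array."""
    if hasattr(array_like, "nbytes"):
        return array_like.nbytes  # numpy, dask, etc. provide this directly
    # fall back to metadata that every array-like carries cheaply
    return array_like.size * array_like.dtype.itemsize


class LazyArray:
    """Stand-in for a lazily-loaded backend array."""
    size = 1_000_000
    dtype = np.dtype("float64")

    @property
    def data(self):  # loading would be expensive; nbytes() never calls this
        raise RuntimeError("whole array loaded into memory!")
```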

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7356/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
689502005 MDExOlB1bGxSZXF1ZXN0NDc2NTM3Mzk3 4395 WIP: Ensure that zarr.ZipStores are closed hmaarrfk 90008 closed 0     4 2020-08-31T20:57:49Z 2023-01-31T21:39:15Z 2023-01-31T21:38:23Z CONTRIBUTOR   0 pydata/xarray/pulls/4395

ZipStores aren't always closed making it hard to use them as fluidly as regular zarr stores.
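A sketch of the guarantee being sought (a generic wrapper, not xarray's backend code): unlike a directory store, a zarr ZipStore must be close()d to finalize the archive, so the store should be closed even when writing fails partway.

```python
from contextlib import contextmanager


@contextmanager
def ensure_closed(store):
    """Yield a store and close it no matter how the block exits."""
    try:
        yield store
    finally:
        close = getattr(store, "close", None)
        if close is not None:
            close()  # for a ZipStore this finalizes the zip archive
```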

  • [ ] Closes #xxxx
  • [x] Tests added
  • [x] Passes isort . && black . && mypy . && flake8 # master doesn't pass black
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4395/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1468595351 PR_kwDOAMm_X85D6oci 7334 Remove code used to support h5py<2.10.0 hmaarrfk 90008 closed 0     1 2022-11-29T19:34:24Z 2022-11-30T23:30:41Z 2022-11-30T23:30:41Z CONTRIBUTOR   0 pydata/xarray/pulls/7334

It seems that the relevant issue was fixed in 2.10.0 https://github.com/h5py/h5py/commit/466181b178c1b8a5bfa6fb8f217319e021f647e0

I'm not sure how far back you want to fix things. I'm hoping to test this on the CI.

I found this since I've been auditing slowdowns in our codebase, which has caused me to review much of the reading pipeline.

Do you want to add a test for h5py>=2.10.0? Or can we assume that users won't install things together? https://pypi.org/project/h5py/2.10.0/

I could, for example, make the backend unavailable if a version of h5py that is too old is detected. Alternatively, one could just keep the code here.
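The version-gating option mentioned here could look like the following sketch (helper names are made up; the 2.10.0 threshold comes from the linked h5py fix; only simple "X.Y.Z" version strings are handled):

```python
from importlib.metadata import PackageNotFoundError, version

MIN_H5PY = (2, 10, 0)


def meets_minimum(version_string, minimum=MIN_H5PY):
    """Compare a plain 'X.Y.Z' version string against a minimum tuple."""
    parts = tuple(int(p) for p in version_string.split(".")[:3])
    return parts >= minimum


def h5py_backend_available():
    """Report whether the backend should register itself at all."""
    try:
        return meets_minimum(version("h5py"))
    except PackageNotFoundError:
        return False  # h5py not installed: backend unavailable
```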

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7334/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1428274982 PR_kwDOAMm_X85BzXXR 7236 Expand benchmarks for dataset insertion and creation hmaarrfk 90008 closed 0     8 2022-10-29T13:55:19Z 2022-10-31T15:04:13Z 2022-10-31T15:03:58Z CONTRIBUTOR   0 pydata/xarray/pulls/7236

Taken from discussions in https://github.com/pydata/xarray/issues/7224#issuecomment-1292216344

Thank you @Illviljan
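For reference, asv benchmarks of this kind follow a setup/time_* class convention; here is a self-contained skeleton (illustrative names, with a plain dict standing in for xr.Dataset so the sketch runs anywhere):

```python
import numpy as np


class DatasetInsertion:
    """asv-style benchmark: cost of adding one variable to a large dataset."""

    params = [10, 100, 1000]
    param_names = ["existing_variables"]

    def setup(self, existing_variables):
        # a real benchmark would build an xr.Dataset here
        self.dataset = {f"var{i}": np.arange(3) for i in range(existing_variables)}

    def time_insert_variable(self, existing_variables):
        # asv times this method for each value in `params`
        self.dataset["new_var"] = np.arange(3)
```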

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7236/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1428264468 PR_kwDOAMm_X85BzVOE 7235 Fix type in benchmarks/merge.py hmaarrfk 90008 closed 0     0 2022-10-29T13:28:12Z 2022-10-29T15:52:45Z 2022-10-29T15:52:45Z CONTRIBUTOR   0 pydata/xarray/pulls/7235

I don't think this affects what is displayed; that is determined by param_names.

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7235/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1423321834 PR_kwDOAMm_X85Bi5BN 7222 Actually make the fast code path return early for Aligner.align hmaarrfk 90008 closed 0     6 2022-10-26T01:59:09Z 2022-10-28T16:22:36Z 2022-10-28T16:22:35Z CONTRIBUTOR   0 pydata/xarray/pulls/7222

In relation to my other PR.

Without this PR

With the early return

Removing the frivolous copy (does not pass tests): ![image](https://user-images.githubusercontent.com/90008/197916632-dbc89c21-94a9-4b92-af11-5b1fa5f5cddd.png)

Code for benchmark:

```python
from tqdm import tqdm
import xarray as xr
from time import perf_counter
import numpy as np

N = 1000

# Everybody is lazy loading now, so lets force modules to get instantiated
dummy_dataset = xr.Dataset()
dummy_dataset['a'] = 1
dummy_dataset['b'] = 1
del dummy_dataset

time_elapsed = np.zeros(N)
dataset = xr.Dataset()
# tqdm = iter
for i in tqdm(range(N)):
    time_start = perf_counter()
    dataset[f"var{i}"] = i
    time_end = perf_counter()
    time_elapsed[i] = time_end - time_start

# %%
from matplotlib import pyplot as plt

plt.plot(np.arange(N), time_elapsed * 1E3, label='Time to add one variable')
plt.xlabel("Number of existing variables")
plt.ylabel("Time to add a variable (ms)")
plt.ylim([0, 10])
plt.grid(True)
```

xref: https://github.com/pydata/xarray/pull/7221
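The shape of the fix, as a toy model (not xarray's actual Aligner): detect up front that every object already carries identical indexes and return immediately, instead of falling through to the general realignment machinery.

```python
class ObjWithIndexes:
    """Minimal stand-in for an object carrying indexes."""

    def __init__(self, indexes):
        self.indexes = indexes


def align(*objects):
    """Return objects unchanged when no realignment is needed."""
    first = objects[0].indexes
    if all(obj.indexes == first for obj in objects[1:]):
        return objects  # fast path: early return, no copies, no reindexing
    # the slow, general path would reindex everything onto joined indexes
    raise NotImplementedError("general realignment not sketched here")
```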

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7222/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1423312198 PR_kwDOAMm_X85Bi3Dp 7221 Remove debugging slow assert statement hmaarrfk 90008 closed 0     13 2022-10-26T01:43:08Z 2022-10-28T02:49:44Z 2022-10-28T02:49:44Z CONTRIBUTOR   0 pydata/xarray/pulls/7221

We've been trying to understand why our code is slow. One part is that we use xarray.Datasets almost like dictionaries for our data. The following code is quite common for us:

```python
import xarray as xr

dataset = xr.Dataset()
dataset['a'] = 1
dataset['b'] = 2
```

However, through benchmarks, it became obvious that the merge_core method of xarray was causing a lot of slowdowns. main branch:

With this merge request:

```python
from tqdm import tqdm
import xarray as xr
from time import perf_counter
import numpy as np

N = 1000

# Everybody is lazy loading now, so lets force modules to get instantiated
dummy_dataset = xr.Dataset()
dummy_dataset['a'] = 1
dummy_dataset['b'] = 1
del dummy_dataset

time_elapsed = np.zeros(N)
dataset = xr.Dataset()

for i in tqdm(range(N)):
    time_start = perf_counter()
    dataset[f"var{i}"] = i
    time_end = perf_counter()
    time_elapsed[i] = time_end - time_start

# %%
from matplotlib import pyplot as plt

plt.plot(np.arange(N), time_elapsed * 1E3, label='Time to add one variable')
plt.xlabel("Number of existing variables")
plt.ylabel("Time to add a variable (ms)")
plt.ylim([0, 50])
plt.grid(True)
```
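The pattern being removed, in miniature (a toy function, not merge_core itself): an invariant check that rescans every existing variable turns each insertion into O(n) work, so a long series of insertions becomes quadratic; hoisting the check behind a debug flag keeps the hot path cheap.

```python
def insert_variable(variables, name, value, debug_invariants=False):
    """Dict-backed stand-in for adding one variable to a dataset."""
    variables[name] = value
    if debug_invariants:
        # the kind of whole-collection rescan that dominated the profile
        assert all(isinstance(k, str) for k in variables)
    return variables
```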

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7221/reactions",
    "total_count": 2,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 2,
    "eyes": 0
}
    xarray 13221727 pull
1423916687 PR_kwDOAMm_X85Bk2By 7223 Dataset insertion benchmark hmaarrfk 90008 closed 0     2 2022-10-26T12:09:14Z 2022-10-27T15:38:09Z 2022-10-27T15:38:09Z CONTRIBUTOR   0 pydata/xarray/pulls/7223

xref: https://github.com/pydata/xarray/pull/7221

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7223/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1410575877 PR_kwDOAMm_X85A4LHp 7172 Lazy import dask.distributed to reduce import time of xarray hmaarrfk 90008 closed 0     9 2022-10-16T18:25:31Z 2022-10-18T17:41:50Z 2022-10-18T17:06:34Z CONTRIBUTOR   0 pydata/xarray/pulls/7172

I was auditing the import time of my software and found that distributed added a non-insignificant amount of time to the import of xarray:

Using tuna, one can find that the following are sources of delay in import time for xarray:

To audit, one can use the command python -X importtime -c "import numpy as np; import pandas as pd; import dask.array; import xarray as xr" 2>import.log && tuna import.log. The command, as is, breaks out the import time of numpy, pandas, and dask.array to allow you to focus on "other" costs within xarray. Main branch:

Proposed:

One would be tempted to think that this is due to xarray.testing and xarray.tutorial but those just move the imports one level down in tuna graphs.
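The general technique, in a simplified sketch (xarray's actual mechanism differs): defer the import until first attribute access, so the cost is paid only by code paths that actually touch the heavy module. Demonstrated here with json standing in for dask.distributed.

```python
import importlib


class LazyModule:
    """Proxy that imports the real module on first attribute access."""

    def __init__(self, name):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        # only called for attributes not found normally, i.e. everything
        # except _name/_module; triggers the real import exactly once
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)


lazy_json = LazyModule("json")  # no import cost paid yet
```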

  • [x] ~~Closes~~
  • [x] ~~Tests added~~
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7172/reactions",
    "total_count": 3,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 3,
    "eyes": 0
}
    xarray 13221727 pull
1098924491 PR_kwDOAMm_X84wyU7M 6154 Use base ImportError not MoudleNotFoundError when testing for plugins hmaarrfk 90008 closed 0     4 2022-01-11T09:48:36Z 2022-01-11T10:28:51Z 2022-01-11T10:24:57Z CONTRIBUTOR   0 pydata/xarray/pulls/6154

Admittedly I had a pretty broken environment (I manually uninstalled C dependencies of python packages installed with conda), but I still expected xarray to "work" with a different backend.

I hope the comments in the code explain why ImportError is preferred to ModuleNotFoundError.

Thank you for considering.
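The distinction the PR relies on, sketched: ModuleNotFoundError is a subclass of ImportError that covers only the module being absent. A package that is installed but broken (e.g. its C library was removed) raises a plain ImportError, so probing with the base class catches both cases.

```python
import importlib


def backend_available(module_name):
    """True if the module imports cleanly; False when absent *or* broken."""
    try:
        importlib.import_module(module_name)
    except ImportError:  # also catches ModuleNotFoundError, its subclass
        return False
    return True
```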

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6154/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
587398134 MDExOlB1bGxSZXF1ZXN0MzkzMzQ5NzIx 3888 [WIP] [DEMO] Add tests for ZipStore for zarr hmaarrfk 90008 closed 0     6 2020-03-25T02:29:20Z 2020-03-26T04:23:05Z 2020-03-25T21:57:09Z CONTRIBUTOR   0 pydata/xarray/pulls/3888
  • [ ] Related to #3815
  • [ ] Tests added
  • [ ] Passes isort -rc . && black . && mypy . && flake8
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3888/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
347712372 MDExOlB1bGxSZXF1ZXN0MjA2MjQ3MjE4 2344 FutureWarning: creation of DataArrays w/ coords Dataset hmaarrfk 90008 closed 0     7 2018-08-05T16:34:59Z 2018-08-06T16:02:09Z 2018-08-06T16:02:09Z CONTRIBUTOR   0 pydata/xarray/pulls/2344

Previously, this would raise a:

FutureWarning: iteration over an xarray.Dataset will change in xarray v0.11 to only include data variables, not coordinates. Iterate over the Dataset.variables property instead to preserve existing behavior in a forwards compatible manner.
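The behavior change the warning describes, modelled with a toy class (xarray's real Dataset is of course more involved): from v0.11, iterating the object itself yields only data variables, while .variables keeps the all-inclusive view, so iterating .variables is the forwards-compatible spelling.

```python
class DatasetLike:
    """Toy model of the v0.11 iteration behavior."""

    def __init__(self, data_vars, coords):
        self.data_vars = data_vars
        self.coords = coords
        # .variables keeps the old, all-inclusive view
        self.variables = {**coords, **data_vars}

    def __iter__(self):
        # new behavior: data variables only
        return iter(self.data_vars)


ds = DatasetLike(data_vars={"temp": [1, 2]}, coords={"x": [0, 1]})
```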

  • [ ] Closes #xxxx (remove if there is no corresponding issue, which should only be the case for minor changes)
  • [ ] Tests added (for all bug fixes or enhancements)
  • [ ] Tests passed (for all non-documentation changes)
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2344/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · Queries took 1518.688ms · About: xarray-datasette