issues


28 rows where comments = 0, state = "open" and user = 2448579 sorted by updated_at descending


id node_id number title user state locked assignee milestone comments created_at updated_at closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
2278510478 PR_kwDOAMm_X85uhIGP 8998 Zarr: Optimize appending dcherian 2448579 open 0     0 2024-05-03T22:21:44Z 2024-05-03T22:23:34Z   MEMBER   1 pydata/xarray/pulls/8998

Builds on #8997

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8998/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2187743087 PR_kwDOAMm_X85ptH1f 8840 Grouper, Resampler as public api dcherian 2448579 open 0     0 2024-03-15T05:16:05Z 2024-04-21T16:21:34Z   MEMBER   1 pydata/xarray/pulls/8840

Expose Grouper and Resampler as public API

TODO:

  • [ ] Consider avoiding IndexVariable


  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8840/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2228319306 I_kwDOAMm_X86E0XRK 8914 swap_dims does not propagate indexes properly dcherian 2448579 open 0     0 2024-04-05T15:36:26Z 2024-04-05T15:36:27Z   MEMBER      

What happened?

Found by hypothesis:

```python
import xarray as xr
import numpy as np

var = xr.Variable(
    dims="2",
    data=np.array(
        ['1970-01-01T00:00:00.000000000',
         '1970-01-01T00:00:00.000000002',
         '1970-01-01T00:00:00.000000001'],
        dtype='datetime64[ns]',
    ),
)
var1 = xr.Variable(data=np.array([0], dtype=np.uint32), dims=['1'], attrs={})

state = xr.Dataset()
state['2'] = var
state = state.stack({"0": ["2"]})
state['1'] = var1
state['1_'] = var1  # .copy(deep=True)
state = state.swap_dims({"1": "1_"})
xr.testing.assertions._assert_internal_invariants(state, False)
```

This swaps the simple pandas-indexed dims correctly, but the multi-index that is in the dataset and is unaffected by the swap_dims op ends up broken.

cc @benbovy

What did you expect to happen?

No response

Minimal Complete Verifiable Example

No response

MVCE confirmation

  • [ ] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [ ] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.
  • [ ] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

No response

Anything else we need to know?

No response

Environment

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8914/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
2021856935 PR_kwDOAMm_X85g81gb 8509 Proof of concept - public Grouper objects dcherian 2448579 open 0     0 2023-12-02T04:52:27Z 2024-03-15T05:18:18Z   MEMBER   1 pydata/xarray/pulls/8509

Not for merging, just proof that it can be done nicely :)

Now builds on #8840 ~Builds on an older version of #8507~

Try it out!

```python
import xarray as xr
from xarray.core.groupers import SeasonGrouper, SeasonResampler

ds = xr.tutorial.open_dataset("air_temperature")

# custom seasons!
ds.air.groupby(time=SeasonGrouper(["JF", "MAM", "JJAS", "OND"])).mean()

ds.air.resample(time=SeasonResampler(["DJF", "MAM", "JJAS", "ON"])).count()
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8509/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2052952379 I_kwDOAMm_X856XZE7 8568 Raise when assigning attrs to virtual variables (default coordinate arrays) dcherian 2448579 open 0     0 2023-12-21T19:24:11Z 2023-12-21T19:24:19Z   MEMBER      

Discussed in https://github.com/pydata/xarray/discussions/8567

<sup>Originally posted by **matthew-brett** December 21, 2023</sup>

Sorry for the introductory question, but we (@ivanov and I) ran into this behavior while experimenting:

```python
import numpy as np
import xarray as xr

data = np.zeros((3, 4, 5))
ds = xr.DataArray(data, dims=('i', 'j', 'k'))
print(ds['k'].attrs)
```

This shows `{}` as we might reasonably expect. But then:

```python
ds['k'].attrs['foo'] = 'bar'
print(ds['k'].attrs)
```

This also gives `{}`, which we found surprising. We worked out why that was, after a little experimentation (the default coordinate arrays seem to get created on the fly and garbage collected immediately). But it took us a little while. Is that as intended? Is there a way of making this less confusing? Thanks for any help.
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8568/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1954809370 I_kwDOAMm_X850hAYa 8353 Update benchmark suite for asv 0.6.1 dcherian 2448579 open 0     0 2023-10-20T18:13:22Z 2023-12-19T05:53:21Z   MEMBER      

The new asv version comes with decorators for parameterizing and skipping, and the ability to use mamba to create environments.

https://github.com/airspeed-velocity/asv/releases

https://asv.readthedocs.io/en/v0.6.1/writing_benchmarks.html#skipping-benchmarks

This might help us reduce benchmark times a bit, or at least simplify the code some.
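For a flavor of the new decorators, a minimal sketch assuming the names in the linked skipping docs (`skip_benchmark_if` from `asv_runner.benchmarks.mark`; the flox check is just an example condition):

```python
import numpy as np
from asv_runner.benchmarks.mark import skip_benchmark_if

try:
    import flox  # noqa: F401
    HAS_FLOX = True
except ImportError:
    HAS_FLOX = False


class GroupByBench:
    def setup(self):
        self.data = np.arange(10_000)

    # previously we'd raise NotImplementedError in setup() to skip;
    # the decorator expresses the skip declaratively instead
    @skip_benchmark_if(not HAS_FLOX)
    def time_sum(self):
        self.data.sum()
```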

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8353/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1975400777 PR_kwDOAMm_X85efqSl 8408 Generalize explicit_indexing_adapter dcherian 2448579 open 0     0 2023-11-03T03:29:40Z 2023-11-03T03:53:25Z   MEMBER   1 pydata/xarray/pulls/8408

Use as_indexable instead of NumpyIndexingAdapter

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8408/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1942893480 I_kwDOAMm_X85zzjOo 8306 keep_attrs for NamedArray dcherian 2448579 open 0     0 2023-10-14T02:29:54Z 2023-10-14T02:31:35Z   MEMBER      

What is your issue?

Copying over @max-sixty's comment from https://github.com/pydata/xarray/pull/8304#discussion_r1358873522

I haven't been in touch with the NamedArray discussions so forgive a glib comment — but re https://github.com/pydata/xarray/issues/3891 — this would be a "once-in-a-library" opportunity to always retain attrs in aggregations, removing the keep_attrs option in methods.

(Xarray could still handle them as it wished, so xarray's external interface wouldn't need to change immediately...)

@pydata/xarray Should we just delete the keep_attrs kwarg completely for NamedArray and always propagate attrs? obj.attrs.clear() seems just as easy to type.
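To make the proposal concrete, a toy illustration of the rule being suggested (plain Python, not the actual NamedArray API):

```python
import numpy as np


class Arr:
    """Toy array-with-attrs showing 'always propagate, no keep_attrs'."""

    def __init__(self, data, attrs=None):
        self.data = np.asarray(data)
        self.attrs = dict(attrs or {})

    def mean(self):
        # no keep_attrs switch: attrs always come along
        return Arr(self.data.mean(), attrs=self.attrs)


a = Arr([1.0, 2.0, 3.0], attrs={"units": "m"})
print(a.mean().attrs)  # {'units': 'm'}

out = a.mean()
out.attrs.clear()  # opting out is just obj.attrs.clear()
```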

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8306/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1902086612 PR_kwDOAMm_X85aoYuf 8206 flox: Set fill_value=np.nan always. dcherian 2448579 open 0     0 2023-09-19T02:19:49Z 2023-09-19T02:23:26Z   MEMBER   1 pydata/xarray/pulls/8206
  • [x] Closes #8090
  • [x] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8206/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1888576440 I_kwDOAMm_X85wkWO4 8162 Update group by multi index dcherian 2448579 open 0     0 2023-09-09T04:50:29Z 2023-09-09T04:50:39Z   MEMBER      

Ideally, GroupBy._infer_concat_args() would return an xr.Coordinates object that contains both the coordinate(s) and their (multi-)index to assign to the result (combined) object.

The goal is to avoid calling create_default_index_implicit(coord) below where coord is a pd.MultiIndex or a single IndexVariable wrapping a multi-index. If coord is a Coordinates object, we could do combined = combined.assign_coords(coord) instead.

https://github.com/pydata/xarray/blob/e2b6f3468ef829b8a83637965d34a164bf3bca78/xarray/core/groupby.py#L1573-L1587

There are actually more general issues:

  • Because the group parameter of Dataset.groupby is a single variable or variable name, it won't be possible to do groupby on a full pandas multi-index once we drop its dimension coordinate (#8143). How can we still support it? Maybe by passing a dimension name to group and checking that there's only one index for that dimension?
  • How can we support custom, multi-coordinate indexes with groupby? I don't have any practical example in mind, but in theory just passing a single coordinate name as group will invalidate the index. Should we drop the index in the result? Or, as suggested above, pass a dimension name as group and check the index?

Originally posted by @benbovy in https://github.com/pydata/xarray/issues/8140#issuecomment-1709775666

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8162/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1824824446 I_kwDOAMm_X85sxJx- 8025 Support Groupby first, last with flox dcherian 2448579 open 0     0 2023-07-27T17:07:51Z 2023-07-27T19:08:06Z   MEMBER      

Is your feature request related to a problem?

flox recently added support for first, last, nanfirst, nanlast. So we should support that on the Xarray GroupBy object.
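Until then, a sketch of calling flox directly, assuming `flox.xarray.xarray_reduce` accepts the new `"nanfirst"`/`"nanlast"` func names:

```python
import flox.xarray
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"a": ("time", np.arange(6.0))},
    coords={"labels": ("time", ["a", "b", "a", "b", "a", "b"])},
)

# direct flox call with the newly added reduction
first = flox.xarray.xarray_reduce(ds, "labels", func="nanfirst")

# the goal: have this dispatch to flox instead of the slow map-reduce path
ds.groupby("labels").first()
```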

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8025/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1700678362 PR_kwDOAMm_X85QBdXY 7828 GroupBy: Fix reducing by subset of grouper dims dcherian 2448579 open 0     0 2023-05-08T18:00:54Z 2023-05-10T02:41:39Z   MEMBER   1 pydata/xarray/pulls/7828
  • [x] Tests added

Fixes yet another bug with GroupBy reductions: we weren't assigning the group index when reducing by a subset of the dimensions present on the grouper.

This will only pass when flox 0.7.1 reaches conda-forge.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7828/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1649611456 I_kwDOAMm_X85iUxLA 7704 follow upstream scipy interpolation improvements dcherian 2448579 open 0     0 2023-03-31T15:46:56Z 2023-03-31T15:46:56Z   MEMBER      

Is your feature request related to a problem?

Scipy 1.10.0 has some great improvements to interpolation (see the release notes), particularly around the fancier methods like pchip.

It'd be good to see if we can simplify some of our code (or even enable using these options).
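For reference, the kind of interpolator this refers to (plain scipy; nothing xarray-specific assumed):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

x = np.linspace(0, 2 * np.pi, 10)
y = np.sin(x)

# shape-preserving piecewise-cubic interpolation; scipy 1.10 reworked
# the internals of interpolators like this one
xnew = np.linspace(0, 2 * np.pi, 100)
ynew = PchipInterpolator(x, y)(xnew)
```

If I remember right, xarray already exposes this via `da.interp(..., method="pchip")`, so most of the simplification would be internal.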

Describe the solution you'd like

No response

Describe alternatives you've considered

No response

Additional context

No response

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7704/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
802525282 MDExOlB1bGxSZXF1ZXN0NTY4NjUzOTg0 4868 facets and hue with hist dcherian 2448579 open 0     0 2021-02-05T22:49:36Z 2022-10-19T07:27:32Z   MEMBER   0 pydata/xarray/pulls/4868
  • [x] Closes #4288
  • [ ] Tests added
  • [x] Passes pre-commit run --all-files
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4868/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
802431534 MDExOlB1bGxSZXF1ZXN0NTY4NTc1NzIw 4866 Refactor line plotting dcherian 2448579 open 0     0 2021-02-05T19:51:24Z 2022-10-18T20:13:14Z   MEMBER   0 pydata/xarray/pulls/4866

Refactors line plotting to use a _plot1d decorator.

Next I'll use this decorator on hist so we can "facet" and "hue" histograms.

see #4288

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4866/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1378174355 I_kwDOAMm_X85SJUWT 7055 Use roundtrip context manager in distributed write tests dcherian 2448579 open 0     0 2022-09-19T15:53:40Z 2022-09-19T15:53:40Z   MEMBER      

What is your issue?

File roundtripping tests in test_distributed.py don't use the roundtrip context manager (though one uses create_tmp_file), so I don't think any created files are being cleaned up.

Example: https://github.com/pydata/xarray/blob/09e467a6a3a8ed68c6c29647ebf2b09288145da1/xarray/tests/test_distributed.py#L91-L119
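For illustration, the shape of the cleanup this buys (a self-contained stand-in; xarray's actual roundtrip helpers live on the backend test classes, and this sketch only mimics them):

```python
import contextlib
import os
import tempfile

import xarray as xr


@contextlib.contextmanager
def roundtrip(ds, **open_kwargs):
    # write to a temp file, yield the re-opened dataset, then clean up
    with tempfile.TemporaryDirectory() as tmpdir:
        path = os.path.join(tmpdir, "test.nc")
        ds.to_netcdf(path)
        with xr.open_dataset(path, **open_kwargs) as actual:
            yield actual
    # the temp file is removed here even if the test body raised


with roundtrip(xr.Dataset({"a": ("x", [1, 2, 3])})) as actual:
    assert int(actual.a.sum()) == 6
```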

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7055/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1203414243 I_kwDOAMm_X85HuqTj 6481 refactor broadcast for flexible indexes dcherian 2448579 open 0     0 2022-04-13T14:51:19Z 2022-04-13T14:51:28Z   MEMBER      

What is your issue?

From @benbovy in https://github.com/pydata/xarray/pull/6477

  • extract common indexes and explicitly pass them to the Dataset and DataArray constructors (when implemented) that are called in the broadcast helper functions (there are some temporary and ugly hacks in create_default_index_implicit so that it works now with pandas multi-indexes wrapped in coordinate variables without the need to pass those indexes explicitly)
  • extract common indexes based on the dimension(s) of their coordinates and not their name (e.g., case of non-dimension but indexed coordinate)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6481/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1194790343 I_kwDOAMm_X85HNw3H 6445 map removes non-dimensional coordinate variables dcherian 2448579 open 0     0 2022-04-06T15:40:40Z 2022-04-06T15:40:40Z   MEMBER      

What happened?

```python
ds = xr.Dataset(
    {"a": ("x", [1, 2, 3])},
    coords={"c": ("x", [1, 2, 3]), "d": ("y", [1, 2, 3, 4])},
)
print(ds.coords)
mapped = ds.map(lambda x: x)
print(mapped.coords)
```

Variable `d` gets dropped in the `map` call; it does not share any dimensions with any of the data variables.

```
Coordinates:
    c        (x) int64 1 2 3
    d        (y) int64 1 2 3 4
Coordinates:
    c        (x) int64 1 2 3
```

What did you expect to happen?

No response

Minimal Complete Verifiable Example

No response

Relevant log output

No response

Anything else we need to know?

No response

Environment

xarray 2022.03.0

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6445/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1171916710 I_kwDOAMm_X85F2gem 6372 apply_ufunc + dask="parallelized" + no core dimensions should raise a nicer error about core dimensions being absent dcherian 2448579 open 0     0 2022-03-17T04:25:37Z 2022-03-17T05:10:16Z   MEMBER      

What happened?

From https://github.com/pydata/xarray/discussions/6370

Calling apply_ufunc(..., dask="parallelized") with no core dimensions and dask input "works" but raises an error on compute (ValueError: axes don't match array from np.transpose).

```python
xr.apply_ufunc(lambda x: np.mean(x), dt, dask="parallelized")
```

What did you expect to happen?

With numpy data the apply_ufunc call does raise an error:

```python
xr.apply_ufunc(lambda x: np.mean(x), dt.compute(), dask="parallelized")
```

```
ValueError: applied function returned data with unexpected number of dimensions. Received 0 dimension(s) but expected 1 dimensions with names: ('x',)
```

Minimal Complete Verifiable Example

```python
import numpy as np
import xarray as xr

dt = xr.Dataset(
    data_vars=dict(
        value=(["x"], [1, 1, 2, 2, 2, 3, 3, 3, 3, 3]),
    ),
    coords=dict(
        lon=(["x"], np.linspace(0, 1, 10)),
    ),
).chunk(chunks={'x': tuple([2, 3, 5])})  # three chunks of different size

xr.apply_ufunc(lambda x: np.mean(x), dt, dask="parallelized")
```

Relevant log output

No response

Anything else we need to know?

No response

Environment

N/A

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6372/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1048856436 I_kwDOAMm_X84-hEd0 5962 Test resampling with dask arrays dcherian 2448579 open 0     0 2021-11-09T17:02:45Z 2021-11-09T17:02:45Z   MEMBER      

I noticed that we don't test resampling with dask arrays (well, just one test).

This could be a good opportunity to convert test_groupby.py to use test fixtures like in https://github.com/pydata/xarray/pull/5411
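A sketch of the fixture pattern that PR uses, applied to resampling (names here are hypothetical):

```python
import numpy as np
import pandas as pd
import pytest
import xarray as xr


@pytest.fixture(params=[True, False], ids=["dask", "numpy"])
def maybe_chunked_ds(request):
    ds = xr.Dataset(
        {"a": ("time", np.arange(365.0))},
        coords={"time": pd.date_range("2000-01-01", periods=365)},
    )
    return ds.chunk({"time": 30}) if request.param else ds


def test_resample_mean(maybe_chunked_ds):
    # runs once with a dask-backed variable, once with numpy
    maybe_chunked_ds.resample(time="7D").mean()
```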

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5962/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1043846371 I_kwDOAMm_X84-N9Tj 5934 add test for custom backend entrypoint dcherian 2448579 open 0     0 2021-11-03T16:57:14Z 2021-11-03T16:57:21Z   MEMBER      

From https://github.com/pydata/xarray/pull/5931

It would be good to add a test checking that custom backend entrypoints work. This might involve creating a dummy package that registers an entrypoint (https://github.com/pydata/xarray/pull/5931#issuecomment-959131968)
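A sketch of what the dummy package could register; the `xarray.backends` entry-point group and the `BackendEntrypoint` base class are xarray's documented plugin mechanism, while everything named `dummy*` is invented for illustration:

```python
import numpy as np
import xarray as xr
from xarray.backends import BackendEntrypoint


class DummyBackendEntrypoint(BackendEntrypoint):
    def open_dataset(self, filename_or_obj, *, drop_variables=None):
        # ignore the input entirely; just prove the plugin was dispatched to
        return xr.Dataset({"a": ("x", np.zeros(3))})

    def guess_can_open(self, filename_or_obj):
        return str(filename_or_obj).endswith(".dummy")


# and in the dummy package's setup.cfg:
#
# [options.entry_points]
# xarray.backends =
#     dummy = dummy_package:DummyBackendEntrypoint
```

The test would then install the package and assert that `xr.open_dataset("file.dummy")` returns the expected dataset.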

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5934/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
938141608 MDU6SXNzdWU5MzgxNDE2MDg= 5582 Faster unstacking of dask arrays dcherian 2448579 open 0     0 2021-07-06T18:12:05Z 2021-07-06T18:54:40Z   MEMBER      

Recent dask versions support assigning to a list of ints along one dimension. We can use this for unstacking (the diff below builds on #5577).

```diff
diff --git i/xarray/core/variable.py w/xarray/core/variable.py
index 222e8dab9..a50dfc574 100644
--- i/xarray/core/variable.py
+++ w/xarray/core/variable.py
@@ -1593,11 +1593,9 @@ class Variable(AbstractArray, NdimSizeLenMixin, VariableArithmetic):
         else:
             dtype = self.dtype

-        if sparse:
+        if sparse and not is_duck_dask_array(reordered):
             # unstacking a dense multitindexed array to a sparse array
-            # Use the sparse.COO constructor until sparse supports advanced indexing
-            # https://github.com/pydata/sparse/issues/114
             # TODO: how do we allow different sparse array types
+            # Use the sparse.COO constructor since we cannot assign to sparse.COO
             from sparse import COO

             codes = zip(*index.codes)
@@ -1618,19 +1616,23 @@ class Variable(AbstractArray, NdimSizeLenMixin, VariableArithmetic):
             )

         else:
+            # dask supports assigning to a list of ints along one axis only.
+            # So we construct an array with the last dimension flattened,
+            # assign the values, then reshape to the final shape.
+            intermediate_shape = reordered.shape[:-1] + (np.prod(new_dim_sizes),)
             indexer = np.ravel_multi_index(index.codes, new_dim_sizes)
             data = np.full_like(
                 self.data,
                 fill_value=fill_value,
-                shape=new_shape,
+                shape=intermediate_shape,
                 dtype=dtype,
             )

             # Indexer is a list of lists of locations. Each list is the locations
             # on the new dimension. This is robust to the data being sparse; in that
             # case the destinations will be NaN / zero.
-            # sparse doesn't support item assigment,
-            # https://github.com/pydata/sparse/issues/114
-            data[(..., *indexer)] = reordered
+            data[(..., indexer)] = reordered
+            data = data.reshape(new_shape)

         return self._replace(dims=new_dims, data=data)
```

This should be what alignment.reindex_variables is doing but I don't fully understand that function.

The annoying bit is figuring out when to use this version and what to do with things like dask wrapping sparse. I think we want to loop over each variable in Dataset.unstack calling Variable.unstack and dispatch based on the type of Variable.data to easily handle all the edge cases.

cc @Illviljan if you're interested in implementing this

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5582/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
798586325 MDU6SXNzdWU3OTg1ODYzMjU= 4852 mention HDF files in docs dcherian 2448579 open 0     0 2021-02-01T18:05:23Z 2021-07-04T01:24:22Z   MEMBER      

This is such a common question that we should address it in the docs.

Just saying that some HDF5 files can be opened with h5netcdf, and that for everything else the user needs to manually create xarray objects, should be enough.

https://xarray.pydata.org/en/stable/io.html
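Roughly the two cases the docs would show (a sketch; the file and variable names are placeholders):

```python
import h5py
import xarray as xr

# netCDF4-flavored HDF5 files usually open directly:
ds = xr.open_dataset("file.h5", engine="h5netcdf")

# arbitrary HDF5 files need manual construction:
with h5py.File("other.h5", "r") as f:
    ds = xr.Dataset({"temperature": (("y", "x"), f["temperature"][:])})
```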

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4852/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
797053785 MDU6SXNzdWU3OTcwNTM3ODU= 4848 simplify API reference presentation dcherian 2448579 open 0     0 2021-01-29T17:23:41Z 2021-01-29T17:23:46Z   MEMBER      

Can we remove `xarray.core.rolling` and `core.rolling` on the left and right respectively? I think the API reference would be a lot more readable if we could do that.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4848/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
787486472 MDU6SXNzdWU3ODc0ODY0NzI= 4817 Add encoding to HTML repr dcherian 2448579 open 0     0 2021-01-16T15:14:50Z 2021-01-24T17:31:31Z   MEMBER      

Is your feature request related to a problem? Please describe.

`.encoding` is somewhat hidden since we don't show it in a repr.

Describe the solution you'd like

I think it'd be nice to add it to the HTML repr, collapsed by default.
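A quick sketch of what is currently invisible in the repr:

```python
import xarray as xr

ds = xr.Dataset({"a": ("x", [1.0, 2.0])})
ds.to_netcdf("tmp.nc", encoding={"a": {"dtype": "int16", "scale_factor": 0.1}})

back = xr.open_dataset("tmp.nc")
print(back.a.encoding)  # populated (dtype, scale_factor, ...)
print(back.a)           # ...but no hint of it in the repr
```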

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4817/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
648250671 MDU6SXNzdWU2NDgyNTA2NzE= 4189 List supported options for `backend_kwargs` in `open_dataset` dcherian 2448579 open 0     0 2020-06-30T15:01:31Z 2020-12-15T04:28:04Z   MEMBER      

We should list supported options for `backend_kwargs` in the docstring for `open_dataset`, and possibly in io.rst.

xref #4187
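For instance, something like the following (a sketch; `phony_dims` is an h5netcdf-specific option that, as far as I know, is passed through this way):

```python
import xarray as xr

# engine-specific options ride along in backend_kwargs
ds = xr.open_dataset(
    "file.h5",
    engine="h5netcdf",
    backend_kwargs={"phony_dims": "sort"},
)
```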

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4189/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
538521723 MDU6SXNzdWU1Mzg1MjE3MjM= 3630 reviewnb for example notebooks? dcherian 2448579 open 0     0 2019-12-16T16:34:28Z 2019-12-16T16:34:28Z   MEMBER      

What do people think of adding ReviewNB https://www.reviewnb.com/ to facilitate easy reviewing of example notebooks?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3630/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
435787982 MDU6SXNzdWU0MzU3ODc5ODI= 2913 Document xarray data model dcherian 2448579 open 0     0 2019-04-22T16:23:41Z 2019-04-22T16:23:41Z   MEMBER      

It would be nice to have a separate page that detailed this for users unfamiliar with netCDF.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2913/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);