issues: 13 rows where comments = 0, type = "issue" and user = 5635139 (max-sixty), sorted by updated_at descending

#8263 · Surprising `.groupby` behavior with float index · max-sixty (MEMBER) · closed as completed · 2023-10-03 → 2024-01-08

What is your issue?

We raise an error on grouping without supplying dims, but not for float indexes — is this intentional or an oversight?

This is without flox installed

```python
import xarray as xr

da = xr.tutorial.open_dataset("air_temperature")['air']

da.drop_vars('lat').groupby('lat').sum()
```

```
ValueError                                Traceback (most recent call last)
Cell In[8], line 1
----> 1 da.drop_vars('lat').groupby('lat').sum()
...
ValueError: cannot reduce over dimensions ['lat']. expected either '...' to reduce over all dimensions or one or more of ('time', 'lon').
```

But with a float index, we don't raise:

```python
da.groupby('lat').sum()
```

...returns the original array:

```
Out[15]: <xarray.DataArray 'air' (time: 2920, lat: 25, lon: 53)>
array([[[296.29   , 296.79   , 297.1    , ..., 296.9    , 296.79   , 296.6    ],
        [295.9    , 296.19998, 296.79   , ..., 295.9    , 295.9    , 295.19998],
        [296.6    , 296.19998, 296.4    , ..., 295.4    , 295.1    , 294.69998],
        ...
```

And if we try this with a non-float index, we get the error again:

```python
da.groupby('time').sum()
```

```
ValueError: cannot reduce over dimensions ['time']. expected either '...' to reduce over all dimensions or one or more of ('lat', 'lon').
```

#8245 · Tools for writing distributed zarrs · max-sixty (MEMBER) · open · created 2023-09-28

What is your issue?

There seems to be a common pattern for writing zarrs from a distributed set of machines, in parallel. It's somewhat described in the prose of the io docs (quoted below; see the sketch after the quotes):

  • Creating the template — "the first step is creating an initial Zarr store without writing all of its array data. This can be done by first creating a Dataset with dummy values stored in dask, and then calling to_zarr with compute=False to write only metadata to Zarr"
  • Writing out each region from workers — "a Zarr store with the correct variable shapes and attributes exists that can be filled out by subsequent calls to to_zarr. The region provides a mapping from dimension names to Python slice objects indicating where the data should be written (in index space, not coordinate space)"
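
A minimal sketch of that two-step pattern, assuming illustrative shapes, chunk sizes, and a `store.zarr` path (none of which come from the docs):

```python
import dask.array
import xarray as xr

# Step 1 (driver): a template with lazy dummy values; compute=False writes
# only the metadata, no array data.
template = xr.Dataset(
    {"air": (("time", "lat"), dask.array.zeros((1000, 25), chunks=(100, 25)))}
)
template.to_zarr("store.zarr", compute=False)

# Step 2 (each worker): write one slab into its region, in index space.
# (Variables not covered by the region may need dropping first; see below.)
def write_slab(slab: xr.Dataset, start: int, stop: int) -> None:
    slab.to_zarr("store.zarr", region={"time": slice(start, stop)})
```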

I've been using this fairly successfully recently. It's much better than writing hundreds or thousands of data variables, since many small data variables create a huge number of files.

Are there some tools we can provide to make this easier? Some ideas:

- [ ] compute=False is arguably a less-than-obvious kwarg meaning "write metadata". Maybe this should be a method, maybe it's a candidate for renaming? Or maybe make_template can be an abstraction over it. Something like xarray_beam.make_template to make the template from a Dataset? Or from an array of indexes?
  - https://github.com/pydata/xarray/issues/8343
  - https://github.com/pydata/xarray/pull/8460
- [ ] What happens if one worker's data isn't aligned on some dimensions? Will that write to the wrong location? Could we offer an option, similar to the above, to reindex on the template dimensions?
- [ ] When writing a region, we need to drop other vars. Can we offer this as a kwarg? Occasionally I'll add a dimension with an index to a dataset, run the function to write it — and it'll fail, because I forgot to add that index to the .drop_vars call that precedes the write. When we're writing a template, all the indexes are written up front anyway. (edit: #6260)
  - https://github.com/pydata/xarray/pull/8460

More minor papercuts:

- [ ] I've hit an issue where writing a region seemed to cause the worker to attempt to load the whole array into memory — can we offer guarantees for when (non-metadata) data will be loaded during to_zarr?
- [ ] How about adding raise_if_dask_computes to our public API? The alternative I've been doing is watching htop and exiting if I see memory ballooning, which is less cerebral...
- [ ] It doesn't seem easy to write coords on a DataArray. For example, writing xr.tutorial.load_dataset('air_temperature').assign_coords(lat2=da.lat + 2, a=(('lon',), ['a'] * len(da.lon))).chunk().to_zarr('foo.zarr', compute=False) will cause the non-index coords to be written as empty. But writing them separately conflicts with having a single variable. Currently I manually load each coord before writing, which is not super-friendly.

Some things that were in the list here, as they've been completed!!

- [x] Requiring region to be specified as an int range can be inconvenient — would it be feasible to have a function that grabs the template metadata, calculates the region ints, and then calculates the implied indexes?
  - Edit: suggested at https://github.com/pydata/xarray/issues/7702

Reactions: 1 (eyes: 1)
#6440 · Add `eval`? · max-sixty (MEMBER) · closed as completed · 2022-04-05 → 2023-12-06

Is your feature request related to a problem?

We currently have query, which runs a numexpr-style string using eval.

Describe the solution you'd like

Should we add an eval method itself? I find that when building something for the command line, allowing people to pass an eval-able expression can be a good interface.
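
A sketch of the proposed usage (hypothetical at the time of writing — `eval` here is the method being proposed, not an existing one):

```python
import xarray as xr

ds = xr.Dataset({"a": ("x", [1, 2, 3]), "b": ("x", [4, 5, 6])})

# The proposed method would evaluate an expression string against the
# dataset's variables, e.g. an expression passed in from a CLI flag:
result = ds.eval("a + b")
```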

Describe alternatives you've considered

No response

Additional context

No response

#8421 · `to_zarr` could transpose dims · max-sixty (MEMBER) · closed as completed · 2023-11-06 → 2023-11-14

Is your feature request related to a problem?

Currently we need to know the order of dims when using region in to_zarr. Generally in xarray we're fine with the order, because we have the names, so this is a bit of an aberration. It means that code needs to carry around the correct order of dims.

Here's an MCVE:

```python
import xarray as xr

ds = xr.tutorial.load_dataset('air_temperature')

ds.to_zarr('foo', mode='w')

ds.transpose(..., 'lat').to_zarr('foo', mode='r+')
```

```
ValueError: variable 'air' already exists with different dimension names ('time', 'lat', 'lon') != ('time', 'lon', 'lat'), but changing variable dimensions is not supported by to_zarr().
```

Describe the solution you'd like

I think we should be able to transpose them based on the target?
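
In the meantime, a user-side workaround sketch (assuming the single-variable `foo` store from the MCVE above): read the target's dimension order back and transpose to match before writing.

```python
# Workaround sketch: match the on-disk dimension order before writing.
target = xr.open_zarr('foo')
ds.transpose(*target['air'].dims).to_zarr('foo', mode='r+')
```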

Describe alternatives you've considered

No response

Additional context

No response

Reactions: 1 (+1: 1)
#8251 · `.chunk()` doesn't create chunks on 0 dim arrays · max-sixty (MEMBER) · open · created 2023-09-28

What happened?

.chunk's docstring states:

``` """Coerce this array's data into a dask arrays with the given chunks.

    If this variable is a non-dask array, it will be converted to dask
    array. If it's a dask array, it will be rechunked to the given chunk
    sizes.

```

...but this doesn't happen for 0 dim arrays; example below.

For context, as part of #8245, I had a function that creates a template array. It created an empty DataArray, then expanded dims for each dimension. And it kept blowing up memory! ...until I realized that it was actually not a lazy array.

What did you expect to happen?

It may be that we can't have a 0-dim dask array — but then we should raise in this method, rather than return the wrong thing.
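
Until then, a defensive check in user code (a sketch, not an xarray API) can catch the silent fallback:

```python
import dask.array
import xarray as xr

arr = xr.DataArray(1).chunk()
# chunk() on a 0-dim array currently hands back numpy, not dask, so this
# assertion fires and surfaces the problem early:
assert isinstance(arr.data, dask.array.Array), "chunk() silently returned numpy"
```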

Minimal Complete Verifiable Example

```python
In [1]: type(xr.DataArray().chunk().data)
Out[1]: numpy.ndarray

In [2]: type(xr.DataArray(1).chunk().data)
Out[2]: numpy.ndarray

In [3]: type(xr.DataArray([1]).chunk().data)
Out[3]: dask.array.core.Array
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

No response

Anything else we need to know?

No response

Environment

```
INSTALLED VERSIONS
------------------
commit: 0d6cd2a39f61128e023628c4352f653537585a12
python: 3.9.18 (main, Aug 24 2023, 21:19:58) [Clang 14.0.3 (clang-1403.0.22.14.1)]
python-bits: 64
OS: Darwin
OS-release: 22.6.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: en_US.UTF-8
LANG: None
LOCALE: ('en_US', 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 2023.8.1.dev25+g8215911a.d20230914
pandas: 2.1.1
numpy: 1.25.2
scipy: 1.11.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.16.0
cftime: None
nc_time_axis: None
PseudoNetCDF: None
iris: None
bottleneck: None
dask: 2023.4.0
distributed: 2023.7.1
matplotlib: 3.5.1
cartopy: None
seaborn: None
numbagg: 0.2.3.dev30+gd26e29e
fsspec: 2021.11.1
cupy: None
pint: None
sparse: None
flox: 0.7.2
numpy_groupies: 0.9.19
setuptools: 68.1.2
pip: 23.2.1
conda: None
pytest: 7.4.0
mypy: 1.5.1
IPython: 8.15.0
sphinx: 4.3.2
```
#8248 · `write_empty_chunks` not in `DataArray.to_zarr` · max-sixty (MEMBER) · open · created 2023-09-28

What is your issue?

Our to_zarr methods on DataArray & Dataset are slightly inconsistent — Dataset.to_zarr has write_empty_chunks and chunkmanager_store_kwargs, which DataArray.to_zarr lacks. The shared parameters are also in a different order.


Up a level — not sure of the best way of enforcing consistency here; a couple of ideas (see the sketch below):

  • We could have tests that operate on both a DataArray and Dataset, parameterized by fixtures (might also help reduce the duplication in some of our tests), though we then need to make the tests generic. We could have some general tests which just test that methods work, and then delegate to the current per-object tests for finer guarantees.
  • We could have a tool which collects the differences between DataArray & Dataset methods and snapshots them — then we'll see if they diverge, while allowing for some divergences.
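
A minimal sketch of the first idea (a hypothetical test, not part of xarray's actual suite): compare the two signatures directly.

```python
import inspect

import xarray as xr

def test_to_zarr_signatures_match():
    # Compare parameter names, and their order, across the two objects.
    da_params = list(inspect.signature(xr.DataArray.to_zarr).parameters)
    ds_params = list(inspect.signature(xr.Dataset.to_zarr).parameters)
    assert da_params == ds_params
```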

#6243 · Maintenance improvements · max-sixty (MEMBER) · open · created 2022-02-05

Is your feature request related to a problem?

At the end of the dev call, we discussed ways to do better at maintenance. I'd like to make Xarray a wonderful place to contribute, partly because it was so formative for me in becoming more involved with software engineering.

Describe the solution you'd like

We've already come far, because of the hard work of many of us!

A few ideas, in increasing order of radical-ness:

  • We looked at @andersy005's dashboards for PRs & Issues. Could we expose this, both to hold ourselves accountable and signal to potential contributors that we care about turnaround time for their contributions?
  • Is there a systematic way of understanding who should review something?
    • FWIW a few months ago I looked for a bot that would recommend a reviewer based on who had contributed code in the past, which I think I've seen before. But I couldn't find one generally available. This would be really helpful — we wouldn't have n people each assessing whether they're the best reviewer for each contribution. If anyone does better than me at finding something like this, that would be awesome.
  • Could we add a label so people can say "now I'm waiting for a review", and track how long those stay up?
    • Ensuring the 95th percentile is < 2 days is more important than the median being in the hours. It does pain me when I see PRs get dropped for a few weeks. TBC, I'm as responsible as anyone.
  • Could we have a bot that asks for feedback on the review process — i.e. "I received a prompt and helpful review", "I would recommend a friend contribute to Xarray", etc.?

Describe alternatives you've considered

No response

Additional context

There's always a danger with making stats legible that Goodhart's law strikes. And sometimes stats are not joyful, and lots of people come here for joy. So probably there's a tradeoff.

#5248 · Appearance of bulleted lists in docs · max-sixty (MEMBER) · closed as completed · 2021-05-02 → 2021-05-03

What happened:

The new docs are looking great! One small issue — the lists don't appear as lists; e.g.

from https://xarray.pydata.org/en/latest/generated/xarray.Dataset.query.html

Do we need to change the rst convention?
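
A common cause (an assumption here, not confirmed against the docs source) is a missing blank line before the list in the docstring — reST renders list items inline without it. An illustrative docstring fragment:

```python
def query(self, queries=None):
    """Return a new dataset indexed by the results of queries.

    Parameters
    ----------
    queries : dict, optional
        Expressions to evaluate. Without the blank line above them,
        the items below would render as one run-on paragraph:

        - strings are evaluated with pandas.eval
        - keys must match dimension names
    """
```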

What you expected to happen:

As bullets, with linebreaks

#3789 · Remove groupby with multi-dimensional warning soon · max-sixty (MEMBER) · closed as completed · 2020-02-21 → 2020-05-06

MCVE Code Sample

We have a very verbose warning in 0.15: it prints on every groupby on an object with multidimensional coords.

So the notebook I'm currently working on has red sections like this, repeated for every groupby call:

```
/home/mroos/.local/lib/python3.7/site-packages/xarray/core/common.py:664: FutureWarning: This DataArray contains multi-dimensional coordinates. In the future, the dimension order of these coordinates will be restored as well unless you specify restore_coord_dims=False.
  self, group, squeeze=squeeze, restore_coord_dims=restore_coord_dims
/home/mroos/.local/lib/python3.7/site-packages/xarray/core/common.py:664: FutureWarning: This DataArray contains multi-dimensional coordinates. In the future, the dimension order of these coordinates will be restored as well unless you specify restore_coord_dims=False.
  self, group, squeeze=squeeze, restore_coord_dims=restore_coord_dims
...
```

Unless there's a way of reducing its verbosity (e.g. only print once per session?), let's aim to push the change through and remove the warning soon?
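
For the interim, a user-side stopgap (a sketch using the standard library warnings filter, not an xarray feature) can silence just this warning:

```python
import warnings

# Suppress only this FutureWarning; `message` is matched as a regex
# against the start of the warning text.
warnings.filterwarnings(
    "ignore",
    message="This DataArray contains multi-dimensional coordinates",
    category=FutureWarning,
)
```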

```python
# Your code here
In [2]: import xarray as xr

In [4]: import numpy as np

In [16]: da = xr.DataArray(np.random.rand(2,3), dims=list('ab'))

In [17]: da = da.assign_coords(foo=(('a','b'),np.random.rand(2,3)))

In [18]: da.groupby('a').mean(...)
[...]/python3.6/site-packages/xarray/core/common.py:664: FutureWarning: This DataArray contains multi-dimensional coordinates. In the future, the dimension order of these coordinates will be restored as well unless you specify restore_coord_dims=False.
  self, group, squeeze=squeeze, restore_coord_dims=restore_coord_dims
Out[18]:
<xarray.DataArray (a: 2)>
array([0.59216558, 0.58616892])
Dimensions without coordinates: a
```

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.8 (default, Aug 7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
python-bits: 64
OS: Linux
OS-release: [...]
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.utf8
LOCALE: en_US.UTF-8
libhdf5: 1.10.4
libnetcdf: None
xarray: 0.15.0
pandas: 0.25.3
numpy: 1.18.1
scipy: 1.4.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: 3.1.2
cartopy: None
seaborn: 0.10.0
numbagg: None
setuptools: 45.0.0
pip: 20.0.2
conda: None
pytest: 5.3.2
IPython: 7.12.0
sphinx: 2.3.1
```
#3892 · Update core developer list · max-sixty (MEMBER) · closed as completed · 2020-03-25 → 2020-04-07

This is out of date: http://xarray.pydata.org/en/stable/roadmap.html#current-core-developers

Reactions: 1 (+1: 1)
#3833 · html repr fails on non-str Dataset keys · max-sixty (MEMBER) · closed as completed · 2020-03-05 → 2020-03-23

MCVE Code Sample

```python
# In a notebook with html repr enabled
import numpy as np
import xarray as xr

xr.Dataset({0: (('a','b'), np.random.rand(2,3))})
```

gives:

```
AttributeError                            Traceback (most recent call last)
.../IPython/core/formatters.py in __call__(self, obj)
    343     method = get_real_method(obj, self.print_method)
    344     if method is not None:
--> 345         return method()

~/.local/lib/python3.7/site-packages/xarray/core/dataset.py in _repr_html_(self)
-> 1634         return formatting_html.dataset_repr(self)

...

~/.local/lib/python3.7/site-packages/xarray/core/formatting_html.py in summarize_variable(name, var, is_index, dtype, preview)
     97     dims_str = f"({', '.join(escape(dim) for dim in var.dims)})"
---> 98     name = escape(name)

.../python3.7/html/__init__.py in escape(s, quote)
---> 19     s = s.replace("&", "&amp;")  # Must be done first!
     20     s = s.replace("<", "&lt;")
     21     s = s.replace(">", "&gt;")

AttributeError: 'int' object has no attribute 'replace'
```

...after which the plain text repr is displayed:

```
<xarray.Dataset>
Dimensions:  (a: 2, b: 3)
Dimensions without coordinates: a, b
Data variables:
    0        (a, b) float64 0.5327 0.927 0.8582 0.8825 0.9478 0.09475
```

Problem Description

I think this may be an uncomplicated fix: coerce the keys to str
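
A sketch of that fix (illustrative, not the actual patch — the real change would live in formatting_html.summarize_variable):

```python
from html import escape

def summarize_name(name) -> str:
    # Coerce non-str keys (e.g. the int 0 above) to str before escaping.
    return escape(str(name))
```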

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.3 | packaged by conda-forge | (default, Jul 1 2019, 21:52:21) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: ...
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.utf8
LOCALE: en_US.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.7.1
xarray: 0.15.0
pandas: 1.0.1
numpy: 1.17.3
scipy: 1.3.2
netCDF4: 1.5.3
pydap: None
h5netcdf: 0.7.4
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.0.4.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.2.1
dask: 2.7.0
distributed: 2.7.0
matplotlib: 3.1.2
cartopy: None
seaborn: 0.9.0
numbagg: installed
setuptools: 41.6.0.post20191101
pip: 19.3.1
conda: None
pytest: 5.2.2
IPython: 7.9.0
sphinx: 2.2.1
```
#1182 · TST: Add Python 3.6 to test environments · max-sixty (MEMBER) · closed as completed · 2016-12-23 → 2017-01-22
#745 · Transposing a Dataset causes PeriodIndex to lose its type · max-sixty (MEMBER) · closed as completed · 2016-02-04 → 2016-02-09

Note the different types in the final two outputs:

```python
import numpy as np
import pandas as pd
import xarray as xr

periods = pd.period_range(start='2000', freq='B', periods=6000)
np_points = np.random.rand(6000, 20)

period_array = xr.DataArray(np_points)
period_array['dim_0'] = periods
period_array
```

```
Out[87]:
<xarray.DataArray (dim_0: 6000, dim_1: 20)>
array([[ 0.36453381,  0.65939328,  0.65642922, ...,  0.66950028,  0.03690508,  0.85428786],
       [ 0.06142194,  0.6391667 ,  0.93972185, ...,  0.26272683,  0.17446443,  0.05473016],
       [ 0.06888458,  0.88798184,  0.7004805 , ...,  0.54081794,  0.11690242,  0.71239621],
       ...,
       [ 0.46578244,  0.47498626,  0.11854992, ...,  0.73731368,  0.44784859,  0.24722402],
       [ 0.02694025,  0.26113875,  0.27635559, ...,  0.6397514 ,  0.94297744,  0.50903873],
       [ 0.2302912 ,  0.5255501 ,  0.98877204, ...,  0.51659326,  0.5516555 ,  0.10720623]])
Coordinates:
  * dim_0    (dim_0) object 2000-01-03 2000-01-04 2000-01-05 2000-01-06 ...
  * dim_1    (dim_1) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
```

```python
period_array.dim_0.to_index()
```

```
Out[93]:
PeriodIndex(['2000-01-03', '2000-01-04', '2000-01-05', '2000-01-06',
             '2000-01-07', '2000-01-10', '2000-01-11', '2000-01-12',
             '2000-01-13', '2000-01-14',
             ...
             '2022-12-19', '2022-12-20', '2022-12-21', '2022-12-22',
             '2022-12-23', '2022-12-26', '2022-12-27', '2022-12-28',
             '2022-12-29', '2022-12-30'],
            dtype='int64', name=u'dim_0', length=6000, freq='B')
```

```python
period_array.transpose('dim_0').dim_0.to_index()
```

```
Out[95]:
Index([2000-01-03, 2000-01-04, 2000-01-05, 2000-01-06, 2000-01-07,
       2000-01-10, 2000-01-11, 2000-01-12, 2000-01-13, 2000-01-14,
       ...
       2022-12-19, 2022-12-20, 2022-12-21, 2022-12-22, 2022-12-23,
       2022-12-26, 2022-12-27, 2022-12-28, 2022-12-29, 2022-12-30],
      dtype='object', name=u'dim_0', length=6000)
```
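
A workaround sketch, continuing from the code above (an assumption that pandas will re-wrap the object-dtype index, not a confirmed fix):

```python
# Rebuild the PeriodIndex from the object-dtype index after transposing.
idx = pd.PeriodIndex(period_array.transpose('dim_0').dim_0.to_index(), freq='B')
```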

