issues
13 rows where comments = 0, type = "issue" and user = 5635139 sorted by updated_at descending
| id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1923361961 | I_kwDOAMm_X85ypCyp | 8263 | Surprising `.groupby` behavior with float index | max-sixty 5635139 | closed | 0 | 0 | 2023-10-03T05:50:49Z | 2024-01-08T01:05:25Z | 2024-01-08T01:05:25Z | MEMBER | What is your issue?

We raise an error on grouping without supplying dims, but not for float indexes — is this intentional or an oversight?

```python
da = xr.tutorial.open_dataset("air_temperature")['air']
da.drop_vars('lat').groupby('lat').sum()
```

```
ValueError                                Traceback (most recent call last)
Cell In[8], line 1
----> 1 da.drop_vars('lat').groupby('lat').sum()
...
ValueError: cannot reduce over dimensions ['lat']. expected either '...' to reduce over all dimensions or one or more of ('time', 'lon').
```

But with a float index, we don't raise:
...returns the original array:
And if we try this with a non-float index, we get the error again:
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/8263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | xarray 13221727 | issue | ||||||
| 1916677049 | I_kwDOAMm_X85yPiu5 | 8245 | Tools for writing distributed zarrs | max-sixty 5635139 | open | 0 | 0 | 2023-09-28T04:25:45Z | 2024-01-04T00:15:09Z | MEMBER | What is your issue?

There seems to be a common pattern for writing zarrs from a distributed set of machines, in parallel. It's somewhat described in the prose of the io docs. Quoting:
I've been using this fairly successfully recently. It's much better than writing hundreds or thousands of data variables, since many small data variables create a huge number of files. Are there some tools we can provide to make this easier? Some ideas:
- [ ]
More minor papercuts:
- [ ] I've hit an issue where writing a region seemed to cause the worker to attempt to load the whole array into memory — can we offer guarantees for when (non-metadata) data will be loaded during …?

Some things that have been removed from the list here, as they've been completed!!
- [x] Requiring |
{
"url": "https://api.github.com/repos/pydata/xarray/issues/8245/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} |
xarray 13221727 | issue | ||||||||
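The region-writing pattern issue 8245 refers to can be sketched roughly as below. This is a hedged illustration built on xarray's documented `compute=False` and `region=` arguments to `to_zarr`; the store path, shapes, and chunking are invented for the example and are not from the issue.

```python
import dask.array as dsa
import numpy as np
import xarray as xr

# A lazy template defining the full shape, dtype and coordinates of the target
# store, without computing any chunk data.
template = xr.Dataset(
    {"air": (("time", "lat"), dsa.zeros((100, 10), chunks=(10, 10)))},
    coords={"time": np.arange(100), "lat": np.linspace(-90, 90, 10)},
)

# Step 1: write the metadata (and the eagerly-computed coordinates) only.
template.to_zarr("example.zarr", mode="w", compute=False)

# Step 2: each worker writes its own slice into the existing store. In a real
# distributed job, each iteration of this loop would run on a separate machine.
for start in range(0, 100, 10):
    part = xr.Dataset({"air": (("time", "lat"), np.random.rand(10, 10))})
    part.to_zarr(
        "example.zarr", region={"time": slice(start, start + 10)}, mode="r+"
    )
```

Wrapping steps like these behind a helper is roughly the kind of tooling the issue asks about.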
| 1192478248 | I_kwDOAMm_X85HE8Yo | 6440 | Add `eval`? | max-sixty 5635139 | closed | 0 | 0 | 2022-04-05T00:57:00Z | 2023-12-06T17:52:47Z | 2023-12-06T17:52:47Z | MEMBER | Is your feature request related to a problem?

We currently have …

Describe the solution you'd like

Should we add an …

Describe alternatives you've considered

No response

Additional context

No response |
{
"url": "https://api.github.com/repos/pydata/xarray/issues/6440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | xarray 13221727 | issue | ||||||
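For context on the request above: a hedged sketch of the `query`/`eval` pairing in pandas, plus xarray's existing `query`. The final, commented-out `ds.eval` call is purely hypothetical here — it illustrates what the issue is asking for, not an API the issue defines.

```python
import numpy as np
import pandas as pd
import xarray as xr

# pandas already pairs the two methods being contrasted:
df = pd.DataFrame({"a": np.arange(5), "b": np.arange(5) * 2})
filtered = df.query("a > 2")      # filter rows with a string expression
extended = df.eval("c = a + b")   # compute a new column from a string expression

# xarray has the first half of the pair:
ds = xr.Dataset({"a": ("x", np.arange(5)), "b": ("x", np.arange(5) * 2)})
selected = ds.query(x="a > 2")    # select along dim x where the expression holds

# The second half would be something like (hypothetical, for illustration only):
# ds.eval("c = a + b")
```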
| 1980019336 | I_kwDOAMm_X852BLKI | 8421 | `to_zarr` could transpose dims | max-sixty 5635139 | closed | 0 | 0 | 2023-11-06T20:38:35Z | 2023-11-14T19:23:08Z | 2023-11-14T19:23:08Z | MEMBER | Is your feature request related to a problem?

Currently we need to know the order of dims when using … Here's an MCVE:

```python
ds = xr.tutorial.load_dataset('air_temperature')

ds.to_zarr('foo', mode='w')
ds.transpose(..., 'lat').to_zarr('foo', mode='r+')

ValueError: variable 'air' already exists with different dimension names ('time', 'lat', 'lon') != ('time', 'lon', 'lat'), but changing variable dimensions is not supported by to_zarr().
```

Describe the solution you'd like

I think we should be able to transpose them based on the target?

Describe alternatives you've considered

No response

Additional context

No response |
{
"url": "https://api.github.com/repos/pydata/xarray/issues/8421/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | xarray 13221727 | issue | ||||||
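A hedged workaround for the MCVE above — an assumption on my part, not something proposed in the issue: read the on-disk dimension order back from the store and transpose to match before the `mode='r+'` write.

```python
import xarray as xr

ds = xr.tutorial.load_dataset("air_temperature")
ds.to_zarr("foo", mode="w")

# Writing the transposed dataset directly raises the ValueError shown above,
# but transposing back to the order already stored in the target succeeds.
transposed = ds.transpose(..., "lat")
target_dims = xr.open_zarr("foo")["air"].dims          # ('time', 'lat', 'lon')
transposed.transpose(*target_dims).to_zarr("foo", mode="r+")
```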
| 1918061661 | I_kwDOAMm_X85yU0xd | 8251 | `.chunk()` doesn't create chunks on 0 dim arrays | max-sixty 5635139 | open | 0 | 0 | 2023-09-28T18:30:50Z | 2023-09-30T21:31:05Z | MEMBER | What happened?
```
"""Coerce this array's data into a dask arrays with the given chunks.
```

...but this doesn't happen for 0 dim arrays; example below.

For context, as part of #8245, I had a function that creates a template array. It created an empty …

What did you expect to happen?

It may be that we can't have a 0-dim dask array — but then we should raise in this method, rather than return the wrong thing.

Minimal Complete Verifiable Example

```Python
[ins] In [1]: type(xr.DataArray().chunk().data)
Out[1]: numpy.ndarray

[ins] In [2]: type(xr.DataArray(1).chunk().data)
Out[2]: numpy.ndarray

[ins] In [3]: type(xr.DataArray([1]).chunk().data)
Out[3]: dask.array.core.Array
```

MVCE confirmation

Relevant log output

No response

Anything else we need to know?

No response

Environment
INSTALLED VERSIONS
------------------
commit: 0d6cd2a39f61128e023628c4352f653537585a12
python: 3.9.18 (main, Aug 24 2023, 21:19:58)
[Clang 14.0.3 (clang-1403.0.22.14.1)]
python-bits: 64
OS: Darwin
OS-release: 22.6.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: en_US.UTF-8
LANG: None
LOCALE: ('en_US', 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 2023.8.1.dev25+g8215911a.d20230914
pandas: 2.1.1
numpy: 1.25.2
scipy: 1.11.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.16.0
cftime: None
nc_time_axis: None
PseudoNetCDF: None
iris: None
bottleneck: None
dask: 2023.4.0
distributed: 2023.7.1
matplotlib: 3.5.1
cartopy: None
seaborn: None
numbagg: 0.2.3.dev30+gd26e29e
fsspec: 2021.11.1
cupy: None
pint: None
sparse: None
flox: 0.7.2
numpy_groupies: 0.9.19
setuptools: 68.1.2
pip: 23.2.1
conda: None
pytest: 7.4.0
mypy: 1.5.1
IPython: 8.15.0
sphinx: 4.3.2
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/8251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
xarray 13221727 | issue | ||||||||
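A point of comparison for the report above (a hedged illustration, not from the issue): dask itself represents 0-dimensional arrays without complaint, which is why `.chunk()` quietly returning NumPy for the 0-dim case reads as a bug rather than a hard limitation.

```python
import dask.array
import numpy as np
import xarray as xr

# A 0-d dask array is perfectly representable...
scalar = dask.array.from_array(np.array(1))
print(type(scalar))                           # dask.array.core.Array
print(scalar.compute())                       # 1

# ...yet per the report, .chunk() on 0-dim DataArrays hands back plain NumPy.
print(type(xr.DataArray(1).chunk().data))     # numpy.ndarray
print(type(xr.DataArray([1]).chunk().data))   # dask.array.core.Array
```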
| 1917820711 | I_kwDOAMm_X85yT58n | 8248 | `write_empty_chunks` not in `DataArray.to_zarr` | max-sixty 5635139 | open | 0 | 0 | 2023-09-28T15:48:22Z | 2023-09-28T15:49:35Z | MEMBER | What is your issue?

Our …

Up a level — not sure of the best way of enforcing consistency here; a couple of ideas.
- We could have tests that operate on both a |
{
"url": "https://api.github.com/repos/pydata/xarray/issues/8248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
xarray 13221727 | issue | ||||||||
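One way to read the "up a level" idea above is to make the `DataArray` path a thin delegation to the `Dataset` one, so the keyword lists cannot drift apart. The helper below is a hedged, illustrative sketch — the function name and the delegation are my assumptions, not xarray's actual implementation.

```python
from typing import Any

import xarray as xr


def dataarray_to_zarr(da: xr.DataArray, store: Any, **kwargs: Any):
    """Forward every keyword to Dataset.to_zarr via a single-variable Dataset.

    If DataArray.to_zarr were a wrapper of this shape, a keyword such as
    write_empty_chunks could not exist on one signature but not the other,
    at the cost of less explicit per-argument documentation.
    """
    # Promote to a single-variable Dataset, then reuse the Dataset code path.
    name = da.name if da.name is not None else "__xarray_dataarray_variable__"
    return da.to_dataset(name=name).to_zarr(store, **kwargs)
```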
| 1125030343 | I_kwDOAMm_X85DDpnH | 6243 | Maintenance improvements | max-sixty 5635139 | open | 0 | 0 | 2022-02-05T21:01:51Z | 2022-02-05T21:01:51Z | MEMBER | Is your feature request related to a problem?

At the end of the dev call, we discussed ways to do better at maintenance. I'd like to make Xarray a wonderful place to contribute, partly because it was so formative for me in becoming more involved with software engineering.

Describe the solution you'd like

We've already come far, because of the hard work of many of us! A few ideas, in increasing order of radical-ness:

- We looked at @andersy005's dashboards for PRs & Issues. Could we expose this, both to hold ourselves accountable and signal to potential contributors that we care about turnaround time for their contributions?
- Is there a systematic way of understanding who should review something?
  - FWIW a few months ago I looked for a bot that would recommend a reviewer based on who had contributed code in the past, which I think I've seen before. But I couldn't find one generally available. This would be really helpful — we wouldn't have n people each assessing whether they're the best reviewer for each contribution. If anyone does better than me at finding something like this, that would be awesome.
- Could we add a label so people can say "now I'm waiting for a review", and track how long those stay up?
  - Ensuring the 95th percentile is < 2 days is more important than the median being in the hours. It does pain me when I see PRs get dropped for a few weeks. TBC, I'm as responsible as anyone.
- Could we have a bot that asks for feedback on the review process — i.e. "I received a prompt and helpful review", "I would recommend a friend contribute to Xarray", etc?

Describe alternatives you've considered

No response

Additional context

There's always a danger with making stats legible that Goodhart's law strikes. And sometimes stats are not joyful, and lots of people come here for joy. So probably there's a tradeoff. |
{
"url": "https://api.github.com/repos/pydata/xarray/issues/6243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
xarray 13221727 | issue | ||||||||
| 874110561 | MDU6SXNzdWU4NzQxMTA1NjE= | 5248 | Appearance of bulleted lists in docs | max-sixty 5635139 | closed | 0 | 0 | 2021-05-02T23:21:49Z | 2021-05-03T23:23:49Z | 2021-05-03T23:23:49Z | MEMBER | What happened: The new docs are looking great! One small issue — the lists don't appear as lists; e.g.
from https://xarray.pydata.org/en/latest/generated/xarray.Dataset.query.html

Do we need to change the rst convention?

What you expected to happen: As bullets, with linebreaks |
{
"url": "https://api.github.com/repos/pydata/xarray/issues/5248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | xarray 13221727 | issue | ||||||
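On the "rst convention" question above: one common cause of bullets collapsing into a paragraph is a missing blank line before the list in the docstring, since reST only starts a list after a separating blank line. The docstring below is a hedged, made-up illustration of that convention, not the actual `Dataset.query` docstring or the confirmed cause of this issue.

```python
def query(ds, queries=None, parser="pandas"):
    """Return a new dataset indexed by the results of the given queries.

    Supported parsers:

    - ``pandas`` : the default parser
    - ``python`` : slower, but supports additional Python syntax

    Note the blank line before the bullets above; without it, Sphinx renders
    the items as one run-on paragraph instead of a list.
    """
```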
| 569162754 | MDU6SXNzdWU1NjkxNjI3NTQ= | 3789 | Remove groupby with multi-dimensional warning soon | max-sixty 5635139 | closed | 0 | 0 | 2020-02-21T20:15:28Z | 2020-05-06T16:39:35Z | 2020-05-06T16:39:35Z | MEMBER | MCVE Code Sample

We have a very verbose warning in 0.15: it prints on every groupby on an object with multidimensional coords. So the notebook I'm currently working on has red sections like:

Unless there's a way of reducing its verbosity (e.g. only print once per session?), let's aim to push the change through and remove the warning soon?

```python
# Your code here
In [2]: import xarray as xr

In [4]: import numpy as np

In [16]: da = xr.DataArray(np.random.rand(2,3), dims=list('ab'))

In [17]: da = da.assign_coords(foo=(('a','b'),np.random.rand(2,3)))

In [18]: da.groupby('a').mean(...)
```

Output of …
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/3789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | xarray 13221727 | issue | ||||||
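On the "only print once per session" aside above: this is already achievable at the user level with a warnings filter, as in this hedged sketch. The warning's category and origin module are assumptions, since the issue doesn't show the exact warning.

```python
import warnings

import numpy as np
import xarray as xr

# Report each matching warning only the first time it is triggered, instead of
# on every groupby call in the notebook.
warnings.filterwarnings("once", category=FutureWarning, module="xarray")

da = xr.DataArray(np.random.rand(2, 3), dims=list("ab"))
da = da.assign_coords(foo=(("a", "b"), np.random.rand(2, 3)))

da.groupby("a").mean(...)   # would warn at most once per session under this filter
da.groupby("a").mean(...)   # silent on repeat
```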
| 587900011 | MDU6SXNzdWU1ODc5MDAwMTE= | 3892 | Update core developer list | max-sixty 5635139 | closed | 0 | 0 | 2020-03-25T18:24:17Z | 2020-04-07T19:28:25Z | 2020-04-07T19:28:25Z | MEMBER | This is out of date: http://xarray.pydata.org/en/stable/roadmap.html#current-core-developers |
{
"url": "https://api.github.com/repos/pydata/xarray/issues/3892/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | xarray 13221727 | issue | ||||||
| 576471089 | MDU6SXNzdWU1NzY0NzEwODk= | 3833 | html repr fails on non-str Dataset keys | max-sixty 5635139 | closed | 0 | 0 | 2020-03-05T19:10:31Z | 2020-03-23T05:39:00Z | 2020-03-23T05:39:00Z | MEMBER | MCVE Code Sample

```python
# In a notebook with html repr enabled
xr.Dataset({0: (('a','b'), np.random.rand(2,3))})

# gives:

AttributeError                            Traceback (most recent call last)
/j/office/app/research-python/conda/envs/2019.10/lib/python3.7/site-packages/IPython/core/formatters.py in __call__(self, obj)
    343             method = get_real_method(obj, self.print_method)
    344             if method is not None:
--> 345                 return method()
    346             return None
    347         else:

~/.local/lib/python3.7/site-packages/xarray/core/dataset.py in _repr_html_(self)
   1632         if OPTIONS["display_style"] == "text":
   1633             return f"<pre>{escape(repr(self))}</pre>"
-> 1634         return formatting_html.dataset_repr(self)
   1635
   1636     def info(self, buf=None) -> None:

~/.local/lib/python3.7/site-packages/xarray/core/formatting_html.py in dataset_repr(ds)
    268     dim_section(ds),
    269     coord_section(ds.coords),
--> 270     datavar_section(ds.data_vars),
    271     attr_section(ds.attrs),
    272 ]

~/.local/lib/python3.7/site-packages/xarray/core/formatting_html.py in _mapping_section(mapping, name, details_func, max_items_collapse, enabled)
    165     return collapsible_section(
    166         name,
--> 167         details=details_func(mapping),
    168         n_items=n_items,
    169         enabled=enabled,

~/.local/lib/python3.7/site-packages/xarray/core/formatting_html.py in summarize_vars(variables)
    131     vars_li = "".join(
    132         f"…

~/.local/lib/python3.7/site-packages/xarray/core/formatting_html.py in <genexpr>(.0)
    131     vars_li = "".join(
    132         f"…

~/.local/lib/python3.7/site-packages/xarray/core/formatting_html.py in summarize_variable(name, var, is_index, dtype, preview)
     96     cssclass_idx = " class='xr-has-index'" if is_index else ""
     97     dims_str = f"({', '.join(escape(dim) for dim in var.dims)})"
---> 98     name = escape(name)
     99     dtype = dtype or escape(str(var.dtype))
    100

/j/office/app/research-python/conda/envs/2019.10/lib/python3.7/html/__init__.py in escape(s, quote)
     17     translated.
     18     """
---> 19     s = s.replace("&", "&amp;")  # Must be done first!
     20     s = s.replace("<", "&lt;")
     21     s = s.replace(">", "&gt;")

AttributeError: 'int' object has no attribute 'replace'

<xarray.Dataset>
Dimensions:  (a: 2, b: 3)
Dimensions without coordinates: a, b
Data variables:
    0        (a, b) float64 0.5327 0.927 0.8582 0.8825 0.9478 0.09475
```

Problem Description

I think this may be an uncomplicated fix: coerce the keys to …

Output of …
|
{
"url": "https://api.github.com/repos/pydata/xarray/issues/3833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | xarray 13221727 | issue | ||||||
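The "coerce the keys to str" suggestion above amounts to wrapping the key in `str()` before it reaches `html.escape()`; a minimal, hedged illustration (not the actual patch that closed the issue):

```python
from html import escape

# The failing call in the traceback: html.escape() assumes a str and calls
# .replace() on it, so an integer Dataset key blows up.
key = 0                    # the non-str key from the MCVE
# escape(key)              # AttributeError: 'int' object has no attribute 'replace'

# Coercing first lets the html repr render the key as text.
print(escape(str(key)))    # -> "0"
```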
| 197412099 | MDU6SXNzdWUxOTc0MTIwOTk= | 1182 | TST: Add Python 3.6 to test environments | max-sixty 5635139 | closed | 0 | 0 | 2016-12-23T18:39:29Z | 2017-01-22T04:31:04Z | 2017-01-22T04:31:04Z | MEMBER | {
"url": "https://api.github.com/repos/pydata/xarray/issues/1182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | xarray 13221727 | issue | |||||||
| 131218863 | MDU6SXNzdWUxMzEyMTg4NjM= | 745 | Transposing a Dataset causes PeriodIndex to lose its type | max-sixty 5635139 | closed | 0 | 0 | 2016-02-04T02:24:05Z | 2016-02-09T16:05:28Z | 2016-02-09T16:05:28Z | MEMBER | Note the different types in the final two outputs

```python
periods = pd.period_range(start='2000', freq='B', periods=6000)
np_points = np.random.rand(6000,20)

period_array = xr.DataArray(np_points)
period_array['dim_0']=periods

period_array
Out[87]:
<xarray.DataArray (dim_0: 6000, dim_1: 20)>
array([[ 0.36453381,  0.65939328,  0.65642922, ...,  0.66950028,  0.03690508,  0.85428786],
       [ 0.06142194,  0.6391667 ,  0.93972185, ...,  0.26272683,  0.17446443,  0.05473016],
       [ 0.06888458,  0.88798184,  0.7004805 , ...,  0.54081794,  0.11690242,  0.71239621],
       ...,
       [ 0.46578244,  0.47498626,  0.11854992, ...,  0.73731368,  0.44784859,  0.24722402],
       [ 0.02694025,  0.26113875,  0.27635559, ...,  0.6397514 ,  0.94297744,  0.50903873],
       [ 0.2302912 ,  0.5255501 ,  0.98877204, ...,  0.51659326,  0.5516555 ,  0.10720623]])
Coordinates:
  * dim_0    (dim_0) object 2000-01-03 2000-01-04 2000-01-05 2000-01-06 ...
  * dim_1    (dim_1) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19

period_array.dim_0.to_index()
Out[93]:
PeriodIndex(['2000-01-03', '2000-01-04', '2000-01-05', '2000-01-06',
             '2000-01-07', '2000-01-10', '2000-01-11', '2000-01-12',
             '2000-01-13', '2000-01-14',
             ...
             '2022-12-19', '2022-12-20', '2022-12-21', '2022-12-22',
             '2022-12-23', '2022-12-26', '2022-12-27', '2022-12-28',
             '2022-12-29', '2022-12-30'],
            dtype='int64', name=u'dim_0', length=6000, freq='B')

In [95]: period_array.transpose('dim_0').dim_0.to_index()
Out[95]:
Index([2000-01-03, 2000-01-04, 2000-01-05, 2000-01-06, 2000-01-07,
       2000-01-10, 2000-01-11, 2000-01-12, 2000-01-13, 2000-01-14,
       ...
       2022-12-19, 2022-12-20, 2022-12-21, 2022-12-22, 2022-12-23,
       2022-12-26, 2022-12-27, 2022-12-28, 2022-12-29, 2022-12-30],
      dtype='object', name=u'dim_0', length=6000)
```
 |
{
"url": "https://api.github.com/repos/pydata/xarray/issues/745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
completed | xarray 13221727 | issue |
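A hedged note on the report above (the issue itself is long closed): the Period values survive as objects, so on the pandas side the index type can be rebuilt. The snippet simulates the object-dtype index the report shows rather than reproducing the old xarray bug.

```python
import pandas as pd

periods = pd.period_range(start="2000", freq="B", periods=6)

# Simulate what the report shows after transpose: the same values, but held in
# a plain object-dtype Index rather than a PeriodIndex.
obj_index = pd.Index(periods.tolist(), dtype=object, name="dim_0")
print(type(obj_index))           # pandas.core.indexes.base.Index

# The PeriodIndex can be rebuilt from those Period objects without loss.
restored = pd.PeriodIndex(obj_index, freq="B")
print(restored.equals(periods))  # True
```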
CREATE TABLE [issues] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[state] TEXT,
[locked] INTEGER,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[comments] INTEGER,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[author_association] TEXT,
[active_lock_reason] TEXT,
[draft] INTEGER,
[pull_request] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[state_reason] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[type] TEXT
);
CREATE INDEX [idx_issues_repo]
ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
ON [issues] ([user]);
