issues

9 rows where comments = 5, state = "closed" and user = 6213168 sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
945560052 MDExOlB1bGxSZXF1ZXN0NjkwODcyNTk1 5610 Fix gen_cluster failures; dask_version tweaks crusaderky 6213168 closed 0     5 2021-07-15T16:26:21Z 2021-07-15T18:04:00Z 2021-07-15T17:25:43Z MEMBER   0 pydata/xarray/pulls/5610
  • fixes one of the issues reported in #5600
  • distributed.utils_test.gen_cluster no longer accepts timeout=None for the sake of robustness
  • deleted ancient dask backwards compatibility code
  • clean up code around dask.__version__
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5610/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
509655174 MDExOlB1bGxSZXF1ZXN0MzMwMTYwMDQy 3420 Restore crashing CI tests on pseudonetcdf-3.1 crusaderky 6213168 closed 0     5 2019-10-20T21:26:40Z 2019-10-21T01:32:54Z 2019-10-20T22:42:36Z MEMBER   0 pydata/xarray/pulls/3420

Related to #3409

The crashes caused by pseudonetcdf-3.1 are blocking all PRs. Sorry I don't know anything about pseudonetcdf. This PR takes the issue out of the critical path so that whoever knows about the library can deal with it in due time.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3420/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
503163130 MDExOlB1bGxSZXF1ZXN0MzI1MDc2MzQ5 3375 Speed up isel and __getitem__ crusaderky 6213168 closed 0 crusaderky 6213168   5 2019-10-06T21:27:42Z 2019-10-10T09:21:56Z 2019-10-09T18:01:30Z MEMBER   0 pydata/xarray/pulls/3375

First iterative improvement for #2799.

Speeds up Dataset.isel by up to 33% and DataArray.isel by up to 25% (when there are no indices and the numpy array is small); 15% speedup when there are indices.

Benchmarks can be found in #2799.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3375/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
478886013 MDExOlB1bGxSZXF1ZXN0MzA1OTA3Mzk2 3196 One-off isort run crusaderky 6213168 closed 0     5 2019-08-09T09:17:39Z 2019-09-09T08:28:05Z 2019-08-23T20:33:04Z MEMBER   0 pydata/xarray/pulls/3196

A one-off, manually vetted and tweaked isort run

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3196/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
202423683 MDU6SXNzdWUyMDI0MjM2ODM= 1224 fast weighted sum crusaderky 6213168 closed 0     5 2017-01-23T00:29:19Z 2019-08-09T08:36:11Z 2019-08-09T08:36:11Z MEMBER      

In my project I'm struggling with weighted sums of 2000-4000 dask-based xarrays. The time to reach the final dask-based array, the size of the final dask dict, and the time to compute the actual result are horrendous.

So I wrote the below which - as laborious as it may look - gives a performance boost nothing short of miraculous. At the bottom you'll find some benchmarks as well.

https://gist.github.com/crusaderky/62832a5ffc72ccb3e0954021b0996fdf

In my project, this deflated the size of the final dask dict from 5.2 million keys to 3.3 million and cut 30% off the time required to define it.

I think it's generic enough to be a good addition to the core xarray module. Impressions?
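The gist itself is linked above; purely as an illustration of the underlying idea (one fused pass over all inputs instead of n-1 pairwise `acc = acc + w * x` accumulations, each of which adds intermediate tasks to the dask graph), a plain-Python sketch might look like:

```python
def fused_weighted_sum(arrays, weights):
    """Weighted sum of equally sized sequences in a single fused pass.

    Plain-Python stand-in for the idea in the gist (not its actual code):
    instead of building n-1 intermediate results via repeated
    ``acc = acc + w * x``, compute each output element directly from all
    inputs at once.
    """
    length = len(arrays[0])
    return [sum(w * arr[i] for w, arr in zip(weights, arrays))
            for i in range(length)]


# fused_weighted_sum([[1, 2], [3, 4]], [0.5, 2]) -> [6.5, 9.0]
```

With dask-backed arrays, the same restructuring is what shrinks the task graph, which is where the reported savings in dict size and definition time would come from.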

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1224/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
465984161 MDU6SXNzdWU0NjU5ODQxNjE= 3089 Python 3.5.0-3.5.1 support crusaderky 6213168 closed 0     5 2019-07-09T21:04:28Z 2019-07-13T21:58:31Z 2019-07-13T21:58:31Z MEMBER      

Python 3.5.0 has dropped out of the conda-forge repository. 3.5.1 is still there... for now. The anaconda repository starts directly from 3.5.4. 3.5.0 and 3.5.1 are a colossal pain in the back for typing support. Is this a good time to increase the requirement to >= 3.5.2? I honestly can't see how anybody would be unable to upgrade to the latest available 3.5 with minimal effort...

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3089/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
324040111 MDU6SXNzdWUzMjQwNDAxMTE= 2149 [REGRESSION] to_netcdf doesn't accept dtype=S1 encoding anymore crusaderky 6213168 closed 0     5 2018-05-17T14:09:15Z 2018-06-01T01:09:38Z 2018-06-01T01:09:38Z MEMBER      

In xarray 0.10.4, the dtype encoding in to_netcdf has stopped working, for all engines:

```python
import xarray

ds = xarray.Dataset({'x': ['foo', 'bar', 'baz']})
ds.to_netcdf('test.nc', encoding={'x': {'dtype': 'S1'}})
```

```
[...]
xarray/backends/netCDF4_.py in _extract_nc4_variable_encoding(variable, raise_on_invalid, lsd_okay, h5py_okay, backend, unlimited_dims)
    196     if invalid:
    197         raise ValueError('unexpected encoding parameters for %r backend: '
--> 198                          ' %r' % (backend, invalid))
    199     else:
    200         for k in list(encoding):

ValueError: unexpected encoding parameters for 'netCDF4' backend: ['dtype']
```

I'm still trying to figure out how the regression tests didn't pick it up and what change introduced it.

@shoyer I'm working on this as my top priority. Do you agree this is serious enough for an emergency re-release? (0.10.4.1 or 0.10.5, your choice)
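For context, the check that raises here is a whitelist over encoding keys. A reduced, hypothetical sketch of that pattern (names and the valid set are illustrative, not xarray's actual code) is:

```python
def extract_variable_encoding(encoding, valid_keys=("zlib", "complevel", "dtype")):
    """Reject encoding parameters outside the backend's whitelist.

    Hypothetical reduction of the check shown in the traceback above; the
    regression amounts to 'dtype' having dropped out of the valid set.
    """
    invalid = [k for k in encoding if k not in valid_keys]
    if invalid:
        raise ValueError('unexpected encoding parameters: %r' % invalid)
    return dict(encoding)
```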

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2149/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
252547273 MDU6SXNzdWUyNTI1NDcyNzM= 1523 Pass arguments to dask.compute() crusaderky 6213168 closed 0     5 2017-08-24T09:48:14Z 2017-09-05T19:55:46Z 2017-09-05T19:55:46Z MEMBER      

I work with a very large dask-based algorithm in xarray, and I do my optimization by hand before hitting compute(). In other cases, I need to use multiple dask schedulers at once (e.g. a multithreaded one for numpy-based work and a multiprocessing one for pure python work).

This change proposal (which I'm happy to do) is about accepting *args, **kwds parameters in all .compute(), .load(), and .persist() xarray methods and passing them verbatim to the underlying dask compute() and persist() functions.
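As a sketch of the shape of the proposal (stub names, not xarray's real internals): each method would forward its arguments verbatim, so a per-call option such as a scheduler choice reaches dask untouched.

```python
def _dask_compute(graph, *args, **kwargs):
    # Stand-in for dask's compute(); it just records what it receives,
    # so the verbatim forwarding is visible.
    return graph, args, kwargs


class DatasetLike:
    """Minimal hypothetical stand-in for an xarray object holding a dask graph."""

    def __init__(self, graph):
        self._graph = graph

    def compute(self, *args, **kwargs):
        # The proposal: pass everything through unchanged.
        return _dask_compute(self._graph, *args, **kwargs)


# DatasetLike({'a': 1}).compute(scheduler='threads')
# -> ({'a': 1}, (), {'scheduler': 'threads'})
```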

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1523/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
189544925 MDExOlB1bGxSZXF1ZXN0OTM4NzYxMTY= 1124 full_like, zeros_like, ones_like crusaderky 6213168 closed 0     5 2016-11-16T00:10:03Z 2016-11-28T03:42:47Z 2016-11-28T03:42:39Z MEMBER   0 pydata/xarray/pulls/1124

New top-level functions. Fixes #1102
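The relationship between the three functions can be sketched in plain Python (illustrative only, operating on lists rather than xarray objects): zeros_like and ones_like are just full_like with a fixed fill value.

```python
def full_like(template, fill_value):
    # Return a new sequence with the template's length, filled with fill_value.
    return [fill_value] * len(template)


def zeros_like(template):
    return full_like(template, 0)


def ones_like(template):
    return full_like(template, 1)


# zeros_like([5, 6, 7]) -> [0, 0, 0]
```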

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1124/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
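The filter described at the top of the page (comments = 5, state = "closed", user = 6213168, sorted by updated_at descending) can be reproduced against this schema with Python's stdlib sqlite3. The rows below are made up, and the schema is reduced to just the columns the query touches:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Reduced version of the [issues] schema above (illustrative subset of columns).
conn.execute("CREATE TABLE issues (id INTEGER PRIMARY KEY, title TEXT, "
             "user INTEGER, state TEXT, comments INTEGER, updated_at TEXT)")
rows = [
    (1, "a", 6213168, "closed", 5, "2021-07-15"),
    (2, "b", 6213168, "open",   5, "2020-01-01"),
    (3, "c", 6213168, "closed", 5, "2019-10-21"),
]
conn.executemany("INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?)", rows)

# The page's filter, expressed as SQL.
result = conn.execute(
    "SELECT id FROM issues WHERE comments = 5 AND state = 'closed' "
    "AND user = 6213168 ORDER BY updated_at DESC"
).fetchall()
# result -> [(1,), (3,)]
```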
Powered by Datasette · Queries took 33.272ms · About: xarray-datasette