[Bug]: Off-main-thread import fails

pydata/xarray issue #6198 · opened by jobh on 2022-01-27 · closed 2022-01-27 · 1 comment

What happened?

Initial import of xarray fails if it happens in a non-main thread.

What did you expect to happen?

The import to succeed on any thread.

Minimal Complete Verifiable Example

```python
import threading

def import_xarray():
    import xarray

thread = threading.Thread(target=import_xarray)
thread.start()
thread.join()

# -> RuntimeError: There is no current event loop in thread 'Thread-1'.
```
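A possible workaround (untested against this exact environment; the structure is illustrative) is to install a fresh event loop in the worker thread before the import chain runs. The sketch below uses `asyncio.get_event_loop()` as a stand-in for the tornado call that fails during import:

```python
import asyncio
import threading

# Hypothetical workaround sketch: give the worker thread its own event
# loop before anything in the import chain asks for one.
results = {}

def worker():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)      # makes get_event_loop() succeed here
    try:
        # `import xarray` would go here; get_event_loop() stands in for
        # the tornado call that raised in the report.
        results["loop"] = asyncio.get_event_loop()
    finally:
        asyncio.set_event_loop(None)  # don't leak the loop to later code
        loop.close()

thread = threading.Thread(target=worker)
thread.start()
thread.join()
print("worker saw a loop:", results["loop"] is not None)
# -> worker saw a loop: True
```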

Relevant log output

```
File "/opt/conda/lib/python3.9/site-packages/xarray/__init__.py", line 1, in <module>
    from . import testing, tutorial, ufuncs
File "/opt/conda/lib/python3.9/site-packages/xarray/tutorial.py", line 13, in <module>
    from .backends.api import open_dataset as _open_dataset
File "/opt/conda/lib/python3.9/site-packages/xarray/backends/__init__.py", line 6, in <module>
    from .cfgrib_ import CfGribDataStore
File "/opt/conda/lib/python3.9/site-packages/xarray/backends/cfgrib_.py", line 16, in <module>
    from .locks import SerializableLock, ensure_lock
File "/opt/conda/lib/python3.9/site-packages/xarray/backends/locks.py", line 13, in <module>
    from dask.distributed import Lock as DistributedLock
File "/opt/conda/lib/python3.9/site-packages/dask/distributed.py", line 11, in <module>
    from distributed import *
File "/opt/conda/lib/python3.9/site-packages/distributed/__init__.py", line 7, in <module>
    from .actor import Actor, ActorFuture
File "/opt/conda/lib/python3.9/site-packages/distributed/actor.py", line 5, in <module>
    from .client import Future
File "/opt/conda/lib/python3.9/site-packages/distributed/client.py", line 59, in <module>
    from .batched import BatchedSend
File "/opt/conda/lib/python3.9/site-packages/distributed/batched.py", line 10, in <module>
    from .core import CommClosedError
File "/opt/conda/lib/python3.9/site-packages/distributed/core.py", line 28, in <module>
    from .comm import (
File "/opt/conda/lib/python3.9/site-packages/distributed/comm/__init__.py", line 25, in <module>
    _register_transports()
File "/opt/conda/lib/python3.9/site-packages/distributed/comm/__init__.py", line 17, in _register_transports
    from . import inproc, tcp, ws
File "/opt/conda/lib/python3.9/site-packages/distributed/comm/tcp.py", line 387, in <module>
    class BaseTCPConnector(Connector, RequireEncryptionMixin):
File "/opt/conda/lib/python3.9/site-packages/distributed/comm/tcp.py", line 389, in BaseTCPConnector
    _resolver = netutil.ExecutorResolver(close_executor=False, executor=_executor)
File "/opt/conda/lib/python3.9/site-packages/tornado/util.py", line 288, in __new__
    instance.initialize(*args, **init_kwargs)
File "/opt/conda/lib/python3.9/site-packages/tornado/netutil.py", line 427, in initialize
    self.io_loop = IOLoop.current()
File "/opt/conda/lib/python3.9/site-packages/tornado/ioloop.py", line 263, in current
    loop = asyncio.get_event_loop()
File "/opt/conda/lib/python3.9/asyncio/events.py", line 642, in get_event_loop
    raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'ThreadPoolExecutor-0_0'.
```

Anything else we need to know?

This happens with version 0.20.2 from conda-forge. It does not happen with version 0.17.0 that I run locally. This may be related to a change in xarray's dependencies rather than xarray itself.

(I see dask, distributed, tornado, and asyncio in the stack trace, but I can't tell which of these is at fault.)
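The traceback bottoms out in `asyncio.get_event_loop()`, which on Python 3.9 raises when called from a non-main thread that has no event loop set. A minimal stdlib-only sketch of that behavior, with no xarray involved:

```python
import asyncio
import threading

# Reproduces the underlying failure without xarray: on Python 3.9,
# asyncio.get_event_loop() raises RuntimeError when called from a
# non-main thread that has no event loop set -- which is exactly what
# tornado's IOLoop.current() does at the bottom of the traceback.
errors = []

def probe():
    try:
        asyncio.get_event_loop()
    except RuntimeError as exc:
        errors.append(str(exc))

thread = threading.Thread(target=probe)
thread.start()
thread.join()
print(errors)
```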

Environment

Failing environment is a fresh Docker image with xarray installed from conda-forge.

A previous non-failing Docker image was built in December. I don't have this image anymore, so I can't check versions there.

