issues: 276241193

  • id: 276241193
  • node_id: MDU6SXNzdWUyNzYyNDExOTM=
  • number: 1738
  • title: Windows/Python 2.7 tests of dask-distributed failing on master/v0.10.0
  • user: 1217238
  • state: closed
  • locked: 0
  • comments: 12
  • created_at: 2017-11-23T00:42:29Z
  • updated_at: 2018-10-09T04:13:41Z
  • closed_at: 2018-10-09T04:13:41Z
  • author_association: MEMBER
  • state_reason: completed
  • repo: 13221727
  • type: issue

body:

Python 2.7 builds on Windows are failing: https://ci.appveyor.com/project/shoyer/xray/build/1.0.3018

The tests that are failing are all variations of test_dask_distributed_integration_test. Example error message:

```
=================================== ERRORS ====================================
______ ERROR at teardown of test_dask_distributed_integration_test[scipy] _____

    @pytest.fixture
    def loop():
        with pristine_loop() as loop:
            # Monkey-patch IOLoop.start to wait for loop stop
            orig_start = loop.start
            is_stopped = threading.Event()
            is_stopped.set()

            def start():
                is_stopped.clear()
                try:
                    orig_start()
                finally:
                    is_stopped.set()
            loop.start = start

            yield loop
            # Stop the loop in case it's still running
            try:
                loop.add_callback(loop.stop)
            except RuntimeError as e:
                if not re.match("IOLoop is clos(ed|ing)", str(e)):
                    raise
            else:
                is_stopped.wait()

C:\Python27-conda64\envs\test_env\lib\site-packages\distributed\utils_test.py:102:

C:\Python27-conda64\envs\test_env\lib\contextlib.py:24: in __exit__
    self.gen.next()
C:\Python27-conda64\envs\test_env\lib\site-packages\distributed\utils_test.py:139: in pristine_loop
    loop.close(all_fds=True)
C:\Python27-conda64\envs\test_env\lib\site-packages\tornado\ioloop.py:716: in close
    self.remove_handler(self._waker.fileno())
C:\Python27-conda64\envs\test_env\lib\site-packages\tornado\platform\common.py:91: in fileno
    return self.reader.fileno()
C:\Python27-conda64\envs\test_env\lib\socket.py:228: in meth
    return getattr(self._sock,name)(*args)

args = (<socket._closedsocket object at 0x00000000131F27F0>, 'fileno')

    def _dummy(*args):
>       raise error(EBADF, 'Bad file descriptor')
E       error: [Errno 9] Bad file descriptor

C:\Python27-conda64\envs\test_env\lib\socket.py:174: error
---------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:1094
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:1096
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:1095
distributed.worker - INFO - Listening to: tcp://127.0.0.1:1096
distributed.worker - INFO - Listening to: tcp://127.0.0.1:1095
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:1094
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:1094
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 2.00 GB
distributed.worker - INFO - Memory: 2.00 GB
distributed.worker - INFO - Local Directory: C:\projects\xray_test_worker-4043f797-3668-459a-9d5b-017dbc092ad5\worker-ozlw8t
distributed.worker - INFO - Local Directory: C:\projects\xray_test_worker-0b2d640d-07ba-493f-967c-f8d8de38e3b5\worker-_xbrz6
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register tcp://127.0.0.1:1096
distributed.worker - INFO - Registered to: tcp://127.0.0.1:1094
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register tcp://127.0.0.1:1095
distributed.worker - INFO - Registered to: tcp://127.0.0.1:1094
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:1095
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:1096
distributed.scheduler - INFO - Receive client connection: Client-06708a40-ce25-11e7-898c-00155d57f2dd
distributed.scheduler - INFO - Connection to client Client-06708a40-ce25-11e7-898c-00155d57f2dd broken
distributed.scheduler - INFO - Remove client Client-06708a40-ce25-11e7-898c-00155d57f2dd
distributed.scheduler - INFO - Close client connection: Client-06708a40-ce25-11e7-898c-00155d57f2dd
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:1095
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:1096
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:1095
distributed.scheduler - INFO - Remove worker tcp://127.0.0.1:1096
distributed.scheduler - INFO - Lost all workers
distributed.worker - INFO - Close compute stream
distributed.worker - INFO - Close compute stream
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
```
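For context on what the traceback shows: teardown fails inside `pristine_loop()` when `loop.close(all_fds=True)` tries to remove the handler for tornado's waker, and on Windows that waker (`tornado.platform.common.Waker`) is backed by a socket pair whose reader has already been closed. On Python 2.7, any delegated method on a closed socket object, including `fileno()`, raises `socket.error: [Errno 9] Bad file descriptor`, whereas on Python 3 `fileno()` on a closed socket simply returns -1; that difference may be why only the Python 2.7 Windows builds trip over this. A minimal sketch of the Python 2.7 behaviour (a hypothetical reproduction, not code from xarray, distributed, or tornado):

```python
# Hypothetical reproduction of the Python 2.7 behaviour behind the EBADF error
# in the traceback above; not code from xarray, distributed, or tornado.
import socket

s = socket.socket()
s.close()  # on Python 2.7 this swaps the underlying _sock for socket._closedsocket
try:
    s.fileno()  # Python 2.7: raises socket.error: [Errno 9] Bad file descriptor
except socket.error as exc:
    print(exc)  # -> [Errno 9] Bad file descriptor
# On Python 3 the same call returns -1 instead of raising.
```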

@mrocklin any guesses about what this could be?

reactions:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1738/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

Links from other tables

  • 0 rows from issues_id in issues_labels
  • 12 rows from issue in issue_comments