test_astype[dask] fails with the Dask executor: some elements of the computed array come back as zeros. See https://github.com/cubed-dev/cubed/actions/runs/12164228442/job/33925369770 for a CI failure.

For example, running locally:
pytest -vs 'cubed/tests/test_array_api.py::test_astype[dask]'
========================================================== test session starts ===========================================================
platform darwin -- Python 3.10.15, pytest-8.3.4, pluggy-1.5.0 -- /Users/tom/miniforge3/envs/cubed-dask-m1/bin/python3.10
cachedir: .pytest_cache
rootdir: /Users/tom/workspace/cubed
configfile: pyproject.toml
plugins: cov-6.0.0, mock-3.14.0
collected 1 item
cubed/tests/test_array_api.py::test_astype[dask] FAILED
================================================================ FAILURES ================================================================
___________________________________________________________ test_astype[dask] ____________________________________________________________
spec = cubed.Spec(work_dir=/private/var/folders/9j/h1v35g4166z6zt816fq7wymc0000gn/T/pytest-of-tom/pytest-273/test_astype_dask_0, allowed_mem=100000, reserved_mem=0, executor=None, storage_options=None, zarr_compressor=default)
executor = <cubed.runtime.executors.dask.DaskExecutor object at 0x108115ba0>
    def test_astype(spec, executor):
        a = xp.asarray([[1, 2, 3], [4, 5, 6], [7, 8, 9]], chunks=(2, 2), spec=spec)
        b = xp.astype(a, xp.int32)
>       assert_array_equal(
            b.compute(executor=executor),
            np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
        )
cubed/tests/test_array_api.py:153:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../miniforge3/envs/cubed-dask-m1/lib/python3.10/site-packages/numpy/_utils/__init__.py:85: in wrapper
return fun(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (<built-in function eq>, array([[1, 2, 0],
       [4, 5, 0],
       [7, 8, 9]], dtype=int32), array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]))
kwds = {'err_msg': '', 'header': 'Arrays are not equal', 'strict': False, 'verbose': True}
    @wraps(func)
    def inner(*args, **kwds):
        with self._recreate_cm():
>           return func(*args, **kwds)
E AssertionError:
E Arrays are not equal
E
E Mismatched elements: 2 / 9 (22.2%)
E Max absolute difference among violations: 6
E Max relative difference among violations: 1.
E        ACTUAL: array([[1, 2, 0],
E               [4, 5, 0],
E               [7, 8, 9]], dtype=int32)
E        DESIRED: array([[1, 2, 3],
E               [4, 5, 6],
E               [7, 8, 9]])
../../miniforge3/envs/cubed-dask-m1/lib/python3.10/contextlib.py:79: AssertionError
----------------------------------------------------------- Captured log call ------------------------------------------------------------
INFO distributed.http.proxy:proxy.py:85 To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
INFO distributed.scheduler:scheduler.py:1750 State start
INFO distributed.scheduler:scheduler.py:4220 Scheduler at: tcp://127.0.0.1:64249
INFO distributed.scheduler:scheduler.py:4235 dashboard at: http://127.0.0.1:8787/status
INFO distributed.scheduler:scheduler.py:8114 Registering Worker plugin shuffle
INFO distributed.nanny:nanny.py:368 Start Nanny at: 'tcp://127.0.0.1:64252'
INFO distributed.nanny:nanny.py:368 Start Nanny at: 'tcp://127.0.0.1:64253'
INFO distributed.nanny:nanny.py:368 Start Nanny at: 'tcp://127.0.0.1:64254'
INFO distributed.nanny:nanny.py:368 Start Nanny at: 'tcp://127.0.0.1:64255'
INFO distributed.scheduler:scheduler.py:4574 Register worker addr: tcp://127.0.0.1:64263 name: 1
INFO distributed.scheduler:scheduler.py:6168 Starting worker compute stream, tcp://127.0.0.1:64263
INFO distributed.core:core.py:883 Starting established connection to tcp://127.0.0.1:64265
INFO distributed.scheduler:scheduler.py:4574 Register worker addr: tcp://127.0.0.1:64267 name: 3
INFO distributed.scheduler:scheduler.py:6168 Starting worker compute stream, tcp://127.0.0.1:64267
INFO distributed.core:core.py:883 Starting established connection to tcp://127.0.0.1:64271
INFO distributed.scheduler:scheduler.py:4574 Register worker addr: tcp://127.0.0.1:64260 name: 0
INFO distributed.scheduler:scheduler.py:6168 Starting worker compute stream, tcp://127.0.0.1:64260
INFO distributed.core:core.py:883 Starting established connection to tcp://127.0.0.1:64262
INFO distributed.scheduler:scheduler.py:4574 Register worker addr: tcp://127.0.0.1:64266 name: 2
INFO distributed.scheduler:scheduler.py:6168 Starting worker compute stream, tcp://127.0.0.1:64266
INFO distributed.core:core.py:883 Starting established connection to tcp://127.0.0.1:64269
INFO distributed.scheduler:scheduler.py:5922 Receive client connection: Client-158a4e76-b266-11ef-8c05-66948709f435
INFO distributed.core:core.py:883 Starting established connection to tcp://127.0.0.1:64272
INFO distributed.scheduler:scheduler.py:5967 Remove client Client-158a4e76-b266-11ef-8c05-66948709f435
INFO distributed.core:core.py:908 Received 'close-stream' from tcp://127.0.0.1:64272; closing.
INFO distributed.scheduler:scheduler.py:5967 Remove client Client-158a4e76-b266-11ef-8c05-66948709f435
INFO distributed.scheduler:scheduler.py:5959 Close client connection: Client-158a4e76-b266-11ef-8c05-66948709f435
INFO distributed.scheduler:scheduler.py:7553 Retire worker addresses (stimulus_id='retire-workers-1733333700.0394292') (0, 1, 2, 3)
INFO distributed.nanny:nanny.py:611 Closing Nanny at 'tcp://127.0.0.1:64252'. Reason: nanny-close
INFO distributed.nanny:nanny.py:858 Nanny asking worker to close. Reason: nanny-close
INFO distributed.nanny:nanny.py:611 Closing Nanny at 'tcp://127.0.0.1:64253'. Reason: nanny-close
INFO distributed.nanny:nanny.py:858 Nanny asking worker to close. Reason: nanny-close
INFO distributed.nanny:nanny.py:611 Closing Nanny at 'tcp://127.0.0.1:64254'. Reason: nanny-close
INFO distributed.nanny:nanny.py:858 Nanny asking worker to close. Reason: nanny-close
INFO distributed.nanny:nanny.py:611 Closing Nanny at 'tcp://127.0.0.1:64255'. Reason: nanny-close
INFO distributed.nanny:nanny.py:858 Nanny asking worker to close. Reason: nanny-close
INFO distributed.core:core.py:908 Received 'close-stream' from tcp://127.0.0.1:64265; closing.
INFO distributed.scheduler:scheduler.py:5431 Remove worker addr: tcp://127.0.0.1:64263 name: 1 (stimulus_id='handle-worker-cleanup-1733333700.041016')
INFO distributed.core:core.py:908 Received 'close-stream' from tcp://127.0.0.1:64262; closing.
INFO distributed.scheduler:scheduler.py:5431 Remove worker addr: tcp://127.0.0.1:64260 name: 0 (stimulus_id='handle-worker-cleanup-1733333700.041572')
INFO distributed.core:core.py:908 Received 'close-stream' from tcp://127.0.0.1:64269; closing.
INFO distributed.scheduler:scheduler.py:5431 Remove worker addr: tcp://127.0.0.1:64266 name: 2 (stimulus_id='handle-worker-cleanup-1733333700.041902')
INFO distributed.core:core.py:908 Received 'close-stream' from tcp://127.0.0.1:64271; closing.
INFO distributed.scheduler:scheduler.py:5431 Remove worker addr: tcp://127.0.0.1:64267 name: 3 (stimulus_id='handle-worker-cleanup-1733333700.04224')
INFO distributed.scheduler:scheduler.py:5566 Lost all workers
INFO distributed.nanny:nanny.py:626 Nanny at 'tcp://127.0.0.1:64255' closed.
INFO distributed.nanny:nanny.py:626 Nanny at 'tcp://127.0.0.1:64253' closed.
INFO distributed.nanny:nanny.py:626 Nanny at 'tcp://127.0.0.1:64254' closed.
INFO distributed.nanny:nanny.py:626 Nanny at 'tcp://127.0.0.1:64252' closed.
INFO distributed.scheduler:scheduler.py:4284 Closing scheduler. Reason: unknown
INFO distributed.scheduler:scheduler.py:4312 Scheduler closing all comms
======================================================== short test summary info =========================================================
FAILED cubed/tests/test_array_api.py::test_astype[dask] - AssertionError:
=========================================================== 1 failed in 1.83s ============================================================
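Note that the two mismatched elements (3 and 6) are exactly the contents of one chunk of the (2, 2)-chunked 3×3 array, so it looks as if a single chunk is being read back as zeros.

For reference, a minimal standalone sketch of the same repro without pytest (the imports and the `allowed_mem` value are taken from the traceback above; `work_dir` is left at its default rather than the pytest tmp dir):

```python
# Standalone sketch of cubed/tests/test_array_api.py::test_astype[dask].
import numpy as np
from numpy.testing import assert_array_equal

import cubed
import cubed.array_api as xp
from cubed.runtime.executors.dask import DaskExecutor

# allowed_mem matches the spec shown in the failure above.
spec = cubed.Spec(allowed_mem=100_000)

a = xp.asarray([[1, 2, 3], [4, 5, 6], [7, 8, 9]], chunks=(2, 2), spec=spec)
b = xp.astype(a, xp.int32)

# Fails as above: the chunk holding [3, 6] comes back as zeros.
assert_array_equal(
    b.compute(executor=DaskExecutor()),
    np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
)
```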