
Make it possible to specify a given SYCL device programmatically #21

@ogrisel

At the moment, the only way to select a given device (CPU or GPU, via Level Zero, OpenCL, or the default host device) is through the SYCL_DEVICE_FILTER environment variable. It would be convenient to be able to select a device programmatically (from within a running Python program, e.g. in a notebook) instead.

However, I am not sure if and how we should extend the engine API to allow for this. One way to do that would be to allow engine provider names to include an extra string spec to select the device:

from sklearn import config_context

with config_context(engine_provider="sklearn_numba_dpex:opencl:gpu:0"):
    model.fit(X_train, y_train)
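
For illustration, here is a minimal sketch of how an engine dispatcher could split such a provider string into a provider name and a SYCL filter selector. The parse_engine_provider helper and its return convention are hypothetical, not an existing API; only dpctl.SyclDevice is real:

import dpctl

def parse_engine_provider(spec):
    # Hypothetical helper: split "provider:backend:device_type:index"
    # into the provider name and a SYCL filter selector string.
    provider, sep, device_filter = spec.partition(":")
    # dpctl accepts filter selector strings such as "opencl:gpu:0".
    device = dpctl.SyclDevice(device_filter) if sep else None
    return provider, device

provider, device = parse_engine_provider("sklearn_numba_dpex:opencl:gpu:0")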

Alternatively, we could allow a separate config param named engine_device for instance, in which case we could directly accept a dpctl device instance for SYCL engine providers.

import dpctl
from sklearn import config_context

device = dpctl.SyclDevice("level_zero:gpu:2")
with config_context(engine_provider="sklearn_numba_dpex", engine_device=device):
    model.fit(X_train, y_train)

A similar problem will arise with different device specifications if we implement plugins for CUDA or ROCm backends instead of using the SYCL indirection (e.g. for potential cuML or pykeops plugins, or even vanilla numba without dpex); see the sketch below.
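
To illustrate the heterogeneity, compare how a CUDA backend such as CuPy selects a device (by integer ordinal) with dpctl's filter-selector style. Both constructors below are real APIs, but their use as engine_device values is hypothetical:

import cupy
import dpctl

cuda_device = cupy.cuda.Device(0)               # CUDA: device ordinal only
sycl_device = dpctl.SyclDevice("opencl:gpu:0")  # SYCL: backend:type:index filter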

For reference, PyTorch allows explicitly moving data and models with some_data.to(device) and model.to(device), and if the data and the model are not on the same device, calling model(some_data) will fail. It's nice because it's explicit, but maybe it is not convenient in the case of scikit-learn because
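
A minimal sketch of the PyTorch behavior described above (assuming a CUDA device is available):

import torch

device = torch.device("cuda:0")
model = torch.nn.Linear(4, 2).to(device)  # move model parameters to the GPU
cpu_data = torch.randn(8, 4)              # this tensor stays on the CPU

model(cpu_data)  # raises RuntimeError: tensors are not all on the same device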
