diff --git a/docs/other_devices_support/multi_devices_use_guide.en.md b/docs/other_devices_support/multi_devices_use_guide.en.md
index 38de4766f5..d71b345513 100644
--- a/docs/other_devices_support/multi_devices_use_guide.en.md
+++ b/docs/other_devices_support/multi_devices_use_guide.en.md
@@ -18,6 +18,8 @@ Kunlun XPU: [Kunlun XPU PaddlePaddle Installation Guide](./paddlepaddle_install_
Hygon DCU: [Hygon DCU PaddlePaddle Installation Guide](./paddlepaddle_install_DCU.en.md)
+Enflame GCU: [Enflame GCU PaddlePaddle Installation Guide](./paddlepaddle_install_GCU.en.md)
+
### 1.2 PaddleX Installation
Welcome to use PaddlePaddle's low-code development tool, PaddleX. Before we officially start the local installation, please clarify your development needs and choose the appropriate installation mode based on your requirements.
@@ -164,4 +166,4 @@ All packages are installed.
## 2. Usage
-The usage of PaddleX model pipeline development tool on hardware platforms such as Ascend NPU, Cambricon MLU, Kunlun XPU, and Hygon DCU is identical to that on GPU. You only need to modify the device configuration parameters according to your hardware platform. For detailed usage tutorials, please refer to [PaddleX Pipeline Development Tool Local Usage Guide](../pipeline_usage/pipeline_develop_guide.en.md).
+The usage of the PaddleX model pipeline development tool on hardware platforms such as Ascend NPU, Cambricon MLU, Kunlun XPU, Hygon DCU, and Enflame GCU is identical to that on GPU. You only need to modify the device configuration parameters according to your hardware platform. For detailed usage tutorials, please refer to the [PaddleX Pipeline Development Tool Local Usage Guide](../pipeline_usage/pipeline_develop_guide.en.md).
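
As the paragraph above says, the only per-platform change is the device configuration parameter. A minimal sketch of that idea (the platform-to-device-string mapping below is an illustrative assumption, not part of this PR):

```python
# Hypothetical mapping from hardware platform to the PaddleX device string.
# The string is what you would pass as the device parameter when running a
# pipeline; only this value changes between platforms.
DEVICE_BY_PLATFORM = {
    "Ascend NPU": "npu:0",
    "Cambricon MLU": "mlu:0",
    "Kunlun XPU": "xpu:0",
    "Hygon DCU": "dcu:0",
    "Enflame GCU": "gcu:0",
}


def device_for(platform: str) -> str:
    """Return the device string to use for a supported hardware platform."""
    return DEVICE_BY_PLATFORM[platform]


print(device_for("Enflame GCU"))  # -> gcu:0
```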
diff --git a/docs/other_devices_support/multi_devices_use_guide.md b/docs/other_devices_support/multi_devices_use_guide.md
index a7e5e387f2..378050a7c4 100644
--- a/docs/other_devices_support/multi_devices_use_guide.md
+++ b/docs/other_devices_support/multi_devices_use_guide.md
@@ -18,6 +18,8 @@ comments: true
海光 DCU:[海光 DCU 飞桨安装教程](./paddlepaddle_install_DCU.md)
+燧原 GCU:[燧原 GCU 飞桨安装教程](./paddlepaddle_install_GCU.md)
+
### 1.2 PaddleX安装
欢迎您使用飞桨低代码开发工具PaddleX,在我们正式开始本地安装之前,请先明确您的开发需求,并根据您的需求选择合适的安装模式。
@@ -161,4 +163,4 @@ paddlex --install --platform gitee.com
All packages are installed.
```
## 2、使用
-基于昇腾 NPU、寒武纪 MLU、昆仑 XPU、海光DCU 硬件平台的 PaddleX 模型产线开发工具使用方法与 GPU 相同,只需根据所属硬件平台,修改配置设备的参数,详细的使用教程可以查阅[PaddleX产线开发工具本地使用教程](../pipeline_usage/pipeline_develop_guide.md)
+基于昇腾 NPU、寒武纪 MLU、昆仑 XPU、海光 DCU、燧原 GCU 硬件平台的 PaddleX 模型产线开发工具使用方法与 GPU 相同,只需根据所属硬件平台,修改配置设备的参数,详细的使用教程可以查阅[PaddleX产线开发工具本地使用教程](../pipeline_usage/pipeline_develop_guide.md)。
diff --git a/docs/other_devices_support/paddlepaddle_install_GCU.en.md b/docs/other_devices_support/paddlepaddle_install_GCU.en.md
new file mode 100644
index 0000000000..58cb79e191
--- /dev/null
+++ b/docs/other_devices_support/paddlepaddle_install_GCU.en.md
@@ -0,0 +1,43 @@
+---
+comments: true
+---
+
+# Enflame GCU PaddlePaddle Installation Tutorial
+
+Currently, PaddleX supports the Enflame S60 chip. Considering environmental differences, we recommend using the Enflame development image provided by PaddlePaddle to complete the environment preparation.
+
+## 1. Docker Environment Preparation
+* Pull the image. This image is for the development environment only and does not contain a pre-compiled PaddlePaddle package. TopsRider, the Enflame base runtime library, is installed in the image by default.
+```bash
+# For X86 architecture
+docker pull ccr-2vdh3abv-pub.cnc.bj.baidubce.com/device/paddle-gcu:topsrider3.2.109-ubuntu20-x86_64-gcc84
+```
+* Start the container with the following command.
+```bash
+docker run --name paddle-gcu-dev -v /home:/home \
+ --network=host --ipc=host -it --privileged \
+ ccr-2vdh3abv-pub.cnc.bj.baidubce.com/device/paddle-gcu:topsrider3.2.109-ubuntu20-x86_64-gcc84 /bin/bash
+```
+* Install the driver **outside of docker**. Please refer to the environment preparation section of [PaddlePaddle Custom Device Implementation for Enflame GCU](https://github.com/PaddlePaddle/PaddleCustomDevice/blob/develop/backends/gcu/README.md).
+```bash
+bash TopsRider_i3x_*_deb_amd64.run --driver --no-auto-load
+```
+## 2. Install Paddle Package
+Download and install the wheel package released by PaddlePaddle within the docker container. Currently, Python 3.10 wheel installation packages are provided. If you require other Python versions, refer to the [PaddlePaddle official documentation](https://www.paddlepaddle.org.cn/en/install/quick) for compilation and installation.
+
+* Download and install the wheel package.
+```bash
+# Note: You need to install the CPU version of PaddlePaddle first
+python -m pip install paddlepaddle==3.0.0.dev20241127 -i https://www.paddlepaddle.org.cn/packages/nightly/cpu/
+python -m pip install paddle_custom_gcu==3.0.0.dev20241127 -i https://www.paddlepaddle.org.cn/packages/nightly/gcu/
+```
+* Verify the installation package. After installation, run the following command:
+```bash
+python -c "import paddle; paddle.utils.run_check()"
+```
+* The expected output is similar to the following:
+```bash
+Running verify PaddlePaddle program ...
+PaddlePaddle works well on 1 gcu.
+PaddlePaddle is installed successfully! Let's start deep learning with PaddlePaddle now.
+```
diff --git a/docs/other_devices_support/paddlepaddle_install_GCU.md b/docs/other_devices_support/paddlepaddle_install_GCU.md
new file mode 100644
index 0000000000..dee44bfa0c
--- /dev/null
+++ b/docs/other_devices_support/paddlepaddle_install_GCU.md
@@ -0,0 +1,44 @@
+---
+comments: true
+---
+
+# 燧原 GCU 飞桨安装教程
+
+当前 PaddleX 支持燧原 S60 芯片。考虑到环境差异性,我们推荐使用飞桨官方提供的燧原 GCU 开发镜像完成环境准备。
+
+## 1、docker环境准备
+* 拉取镜像,此镜像仅为开发环境,镜像中不包含预编译的飞桨安装包,镜像中已经默认安装了燧原软件栈 TopsRider。
+```bash
+# 适用于 X86 架构
+docker pull ccr-2vdh3abv-pub.cnc.bj.baidubce.com/device/paddle-gcu:topsrider3.2.109-ubuntu20-x86_64-gcc84
+```
+* 参考如下命令启动容器
+```bash
+docker run --name paddle-gcu-dev -v /home:/home \
+ --network=host --ipc=host -it --privileged \
+ ccr-2vdh3abv-pub.cnc.bj.baidubce.com/device/paddle-gcu:topsrider3.2.109-ubuntu20-x86_64-gcc84 /bin/bash
+```
+* **容器外**安装驱动程序。可以参考[飞桨自定义接入硬件后端(GCU)](https://github.com/PaddlePaddle/PaddleCustomDevice/blob/develop/backends/gcu/README_cn.md)环境准备章节。
+```bash
+bash TopsRider_i3x_*_deb_amd64.run --driver --no-auto-load
+```
+## 2、安装paddle包
+在启动的 docker 容器中,下载并安装飞桨官网发布的 wheel 包。当前提供 Python3.10 的 wheel 安装包。如有其他 Python 版本需求,可以参考[飞桨官方文档](https://www.paddlepaddle.org.cn/install/quick)自行编译安装。
+
+* 下载并安装 wheel 包。
+```bash
+# 注意需要先安装飞桨 cpu 版本
+python -m pip install paddlepaddle==3.0.0.dev20241127 -i https://www.paddlepaddle.org.cn/packages/nightly/cpu/
+python -m pip install paddle_custom_gcu==3.0.0.dev20241127 -i https://www.paddlepaddle.org.cn/packages/nightly/gcu/
+```
+* 验证安装包:安装完成之后,运行如下命令:
+```bash
+python -c "import paddle; paddle.utils.run_check()"
+```
+预期得到类似如下输出结果:
+
+```bash
+Running verify PaddlePaddle program ...
+PaddlePaddle works well on 1 gcu.
+PaddlePaddle is installed successfully! Let's start deep learning with PaddlePaddle now.
+```
diff --git a/docs/support_list/model_list_gcu.en.md b/docs/support_list/model_list_gcu.en.md
new file mode 100644
index 0000000000..cc1ec1e566
--- /dev/null
+++ b/docs/support_list/model_list_gcu.en.md
@@ -0,0 +1,26 @@
+---
+comments: true
+---
+
+# PaddleX Model List (Enflame GCU)
+
+PaddleX incorporates multiple pipelines, each containing several modules, and each module encompasses various models. You can select the appropriate models based on the benchmark data below. If you prioritize model accuracy, choose models with higher accuracy. If you prioritize model size, select models with smaller storage requirements.
+
+## Image Classification Module
+
+<table>
+<thead>
+<tr>
+<th>Model Name</th>
+<th>Top-1 Acc (%)</th>
+<th>Model Storage Size (M)</th>
+<th>Model Download Link</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>ResNet50</td>
+<td>76.96</td>
+<td>90.8 M</td>
+<td>Inference Model / Training Model</td>
+</tr>
+</tbody>
+</table>
+
+Note: The above accuracy metrics refer to Top-1 Accuracy on the [ImageNet-1k](https://www.image-net.org/index.php) validation set.
diff --git a/docs/support_list/model_list_gcu.md b/docs/support_list/model_list_gcu.md
new file mode 100644
index 0000000000..c91c400a1b
--- /dev/null
+++ b/docs/support_list/model_list_gcu.md
@@ -0,0 +1,26 @@
+---
+comments: true
+---
+
+# PaddleX模型列表(燧原 GCU)
+
+PaddleX 内置了多条产线,每条产线都包含了若干模块,每个模块包含若干模型,具体使用哪些模型,您可以根据下边的 benchmark 数据来选择。如您更考虑模型精度,请选择精度较高的模型,如您更考虑模型存储大小,请选择存储大小较小的模型。
+
+## 图像分类模块
+
+<table>
+<thead>
+<tr>
+<th>模型名称</th>
+<th>Top1 Acc(%)</th>
+<th>模型存储大小(M)</th>
+<th>模型下载链接</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>ResNet50</td>
+<td>76.96</td>
+<td>90.8 M</td>
+<td>推理模型/训练模型</td>
+</tr>
+</tbody>
+</table>
+
+注:以上精度指标为[ImageNet-1k](https://www.image-net.org/index.php)验证集 Top1 Acc。
diff --git a/mkdocs.yml b/mkdocs.yml
index 382067b9d6..6b61e29ec6 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -170,6 +170,7 @@ plugins:
寒武纪 MLU 飞桨安装教程: Cambricon MLU PaddlePaddle Installation Guide
昇腾 NPU 飞桨安装教程: Ascend NPU PaddlePaddle Installation Guide
昆仑 XPU 飞桨安装教程: Kunlun XPU PaddlePaddle Installation Guide
+ 燧原 GCU 飞桨安装教程: Enflame GCU PaddlePaddle Installation Guide
数据标注教程: Data Annotation Tutorials
计算机视觉: Computer Vision
图像分类任务模块: Image Classification Task
@@ -197,6 +198,7 @@ plugins:
PaddleX模型列表(MLU): PaddleX Model List (MLU)
PaddleX模型列表(NPU): PaddleX Model List (NPU)
PaddleX模型列表(XPU): PaddleX Model List (XPU)
+ PaddleX模型列表(GCU): PaddleX Model List (GCU)
产业实践教程&范例: Practical Tutorials & Examples
论文文献信息抽取教程: Document Scene Information Extraction Tutorial
垃圾分类教程: Garbage Classification Tutorial
@@ -371,6 +373,7 @@ nav:
- 寒武纪 MLU 飞桨安装教程: other_devices_support/paddlepaddle_install_MLU.md
- 昇腾 NPU 飞桨安装教程: other_devices_support/paddlepaddle_install_NPU.md
- 昆仑 XPU 飞桨安装教程: other_devices_support/paddlepaddle_install_XPU.md
+ - 燧原 GCU 飞桨安装教程: other_devices_support/paddlepaddle_install_GCU.md
- 数据标注教程:
- 计算机视觉:
- 图像分类任务模块: data_annotations/cv_modules/image_classification.md
@@ -398,6 +401,7 @@ nav:
- PaddleX模型列表(MLU): support_list/model_list_mlu.md
- PaddleX模型列表(NPU): support_list/model_list_npu.md
- PaddleX模型列表(XPU): support_list/model_list_xpu.md
+ - PaddleX模型列表(GCU): support_list/model_list_gcu.md
- 产业实践教程&范例:
- 论文文献信息抽取教程: practical_tutorials/document_scene_information_extraction(layout_detection)_tutorial.md
- 垃圾分类教程: practical_tutorials/image_classification_garbage_tutorial.md
diff --git a/paddlex/inference/components/paddle_predictor/predictor.py b/paddlex/inference/components/paddle_predictor/predictor.py
index 9656ada61b..964f24ad3f 100644
--- a/paddlex/inference/components/paddle_predictor/predictor.py
+++ b/paddlex/inference/components/paddle_predictor/predictor.py
@@ -154,6 +154,26 @@ def _create(self):
             pass
         elif self.option.device == "mlu":
             config.enable_custom_device("mlu")
+        elif self.option.device == "gcu":
+            assert paddle.device.is_compiled_with_custom_device("gcu"), (
+                "The device is set to gcu, but your PaddlePaddle "
+                "is not compiled with GCU support!"
+            )
+            config.enable_custom_device("gcu")
+            from paddle_custom_device.gcu import passes as gcu_passes
+
+            gcu_passes.setUp()
+            name = "PaddleX_" + self.option.model_name
+            if hasattr(config, "enable_new_ir") and self.option.enable_new_ir:
+                config.enable_new_ir(True)
+                config.enable_new_executor(True)
+                kPirGcuPasses = gcu_passes.inference_passes(use_pir=True, name=name)
+                config.enable_custom_passes(kPirGcuPasses, True)
+            else:
+                config.enable_new_ir(False)
+                config.enable_new_executor(False)
+                pass_builder = config.pass_builder()
+                gcu_passes.append_passes_for_legacy_ir(pass_builder, name)
         else:
             assert self.option.device == "cpu"
             config.disable_gpu()
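
The GCU branch above registers inference passes along one of two paths, depending on whether the new PIR (new IR) executor is both available and requested. A standalone sketch of that decision (the function name is illustrative, not from the PR):

```python
def select_gcu_ir_mode(config_has_new_ir: bool, option_enable_new_ir: bool) -> str:
    """Mirror the predictor branch: PIR custom passes are used only when the
    Config object exposes enable_new_ir AND the predictor option requests it;
    otherwise the legacy-IR pass-builder path is taken."""
    if config_has_new_ir and option_enable_new_ir:
        # corresponds to enable_new_ir(True) + enable_custom_passes(...)
        return "pir"
    # corresponds to enable_new_ir(False) + append_passes_for_legacy_ir(...)
    return "legacy"
```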
diff --git a/paddlex/inference/utils/pp_option.py b/paddlex/inference/utils/pp_option.py
index 968d866e82..5a8e22b4c2 100644
--- a/paddlex/inference/utils/pp_option.py
+++ b/paddlex/inference/utils/pp_option.py
@@ -31,7 +31,7 @@ class PaddlePredictorOption(object):
         "mkldnn",
         "mkldnn_bf16",
     )
-    SUPPORT_DEVICE = ("gpu", "cpu", "npu", "xpu", "mlu", "dcu")
+    SUPPORT_DEVICE = ("gpu", "cpu", "npu", "xpu", "mlu", "dcu", "gcu")

     def __init__(self, model_name=None, **kwargs):
         super().__init__()
diff --git a/paddlex/modules/base/build_model.py b/paddlex/modules/base/build_model.py
index 0bacbcd34f..d6d9223eae 100644
--- a/paddlex/modules/base/build_model.py
+++ b/paddlex/modules/base/build_model.py
@@ -22,7 +22,7 @@ def build_model(model_name: str, config_path: str = None) -> tuple:
     Args:
         model_name (str): model name
-        device (str): device, such as gpu, cpu, npu, xpu, mlu
+        device (str): device, such as gpu, cpu, npu, xpu, mlu, gcu
         config_path (str, optional): path to the PaddleX config yaml file.
             Defaults to None, i.e. using the default config file.
diff --git a/paddlex/repo_apis/PaddleDetection_api/instance_seg/config.py b/paddlex/repo_apis/PaddleDetection_api/instance_seg/config.py
index 2f30625e86..35094a0171 100644
--- a/paddlex/repo_apis/PaddleDetection_api/instance_seg/config.py
+++ b/paddlex/repo_apis/PaddleDetection_api/instance_seg/config.py
@@ -270,6 +270,9 @@ def update_device(self, device_type: str):
         elif device_type.lower() == "mlu":
             self["use_mlu"] = True
             self["use_gpu"] = False
+        elif device_type.lower() == "gcu":
+            self["use_gcu"] = True
+            self["use_gpu"] = False
         else:
             assert device_type.lower() == "cpu"
             self["use_gpu"] = False
diff --git a/paddlex/repo_apis/PaddleDetection_api/object_det/config.py b/paddlex/repo_apis/PaddleDetection_api/object_det/config.py
index 0e4c0fd486..462e5e3bfd 100644
--- a/paddlex/repo_apis/PaddleDetection_api/object_det/config.py
+++ b/paddlex/repo_apis/PaddleDetection_api/object_det/config.py
@@ -271,6 +271,9 @@ def update_device(self, device_type: str):
         elif device_type.lower() == "mlu":
             self["use_mlu"] = True
             self["use_gpu"] = False
+        elif device_type.lower() == "gcu":
+            self["use_gcu"] = True
+            self["use_gpu"] = False
         else:
             assert device_type.lower() == "cpu"
             self["use_gpu"] = False
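
Both PaddleDetection configs above follow the same pattern: clear `use_gpu` and raise the flag for the selected accelerator. A condensed, self-contained sketch of that pattern (the helper name is illustrative):

```python
def detection_device_flags(device_type: str) -> dict:
    """Flags that update_device writes into a PaddleDetection config:
    one use_<device> flag is enabled and use_gpu is cleared for any
    non-GPU device; cpu clears use_gpu only."""
    dt = device_type.lower()
    if dt == "gpu":
        return {"use_gpu": True}
    if dt in ("npu", "xpu", "mlu", "gcu"):
        return {f"use_{dt}": True, "use_gpu": False}
    assert dt == "cpu"
    return {"use_gpu": False}
```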
diff --git a/paddlex/repo_apis/PaddleOCR_api/text_rec/config.py b/paddlex/repo_apis/PaddleOCR_api/text_rec/config.py
index b40b805e25..498b744287 100644
--- a/paddlex/repo_apis/PaddleOCR_api/text_rec/config.py
+++ b/paddlex/repo_apis/PaddleOCR_api/text_rec/config.py
@@ -228,6 +228,7 @@ def update_device(self, device: str):
             "Global.use_xpu": False,
             "Global.use_npu": False,
             "Global.use_mlu": False,
+            "Global.use_gcu": False,
         }

         device_cfg = {
@@ -236,6 +237,7 @@ def update_device(self, device: str):
             "xpu": {"Global.use_xpu": True},
             "mlu": {"Global.use_mlu": True},
             "npu": {"Global.use_npu": True},
+            "gcu": {"Global.use_gcu": True},
         }
         default_cfg.update(device_cfg[device])
         self.update(default_cfg)
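
The OCR config uses a reset-then-override pattern: every `Global.use_*` flag defaults to False, then the entry for the chosen device flips exactly one back on. A simplified sketch of that pattern (some keys of the real `default_cfg` are elided, and the helper name is illustrative):

```python
def ocr_device_cfg(device: str) -> dict:
    """All backend flags off by default, then enable the flag for `device`;
    cpu leaves every flag off."""
    cfg = {f"Global.use_{d}": False for d in ("gpu", "xpu", "npu", "mlu", "gcu")}
    if device != "cpu":
        cfg[f"Global.use_{device}"] = True
    return cfg
```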
diff --git a/paddlex/repo_apis/base/runner.py b/paddlex/repo_apis/base/runner.py
index 3ba1ccff5e..fcdcacdd1a 100644
--- a/paddlex/repo_apis/base/runner.py
+++ b/paddlex/repo_apis/base/runner.py
@@ -205,6 +205,8 @@ def distributed(self, device, ips=None, log_dir=None):
             new_env["ASCEND_RT_VISIBLE_DEVICES"] = dev_ids
         elif device == "mlu":
             new_env["MLU_VISIBLE_DEVICES"] = dev_ids
+        elif device == "gcu":
+            new_env["TOPS_VISIBLE_DEVICES"] = dev_ids
         else:
             new_env["CUDA_VISIBLE_DEVICES"] = dev_ids
         return args, new_env
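
Each backend pins visible device IDs through its own environment variable, with `CUDA_VISIBLE_DEVICES` as the fallback; the hunk above adds `TOPS_VISIBLE_DEVICES` for GCU. A sketch of that mapping (only the backends shown in this hunk are included; others from the surrounding function are elided):

```python
# Environment variable used to restrict visible device IDs per backend.
VISIBLE_DEVICES_ENV = {
    "npu": "ASCEND_RT_VISIBLE_DEVICES",
    "mlu": "MLU_VISIBLE_DEVICES",
    "gcu": "TOPS_VISIBLE_DEVICES",
}


def visible_devices_env(device: str) -> str:
    """Return the env var that pins device IDs; CUDA is the fallback."""
    return VISIBLE_DEVICES_ENV.get(device, "CUDA_VISIBLE_DEVICES")
```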
diff --git a/paddlex/utils/device.py b/paddlex/utils/device.py
index 16ce8ff151..fcd916b65b 100644
--- a/paddlex/utils/device.py
+++ b/paddlex/utils/device.py
@@ -19,7 +19,7 @@
 from . import logging
 from .errors import raise_unsupported_device_error

-SUPPORTED_DEVICE_TYPE = ["cpu", "gpu", "xpu", "npu", "mlu"]
+SUPPORTED_DEVICE_TYPE = ["cpu", "gpu", "xpu", "npu", "mlu", "gcu"]


 def _constr_device(device_type, device_ids):
@@ -77,7 +77,7 @@ def _set(envs):
         logging.debug(f"{key} has been set to {val}.")

     device_type, device_ids = parse_device(device)
-    if device_type.lower() in ["gpu", "xpu", "npu", "mlu"]:
+    if device_type.lower() in ["gpu", "xpu", "npu", "mlu", "gcu"]:
         if device_type.lower() == "gpu" and paddle.is_compiled_with_rocm():
             envs = {"FLAGS_conv_workspace_size_limit": "2000"}
             _set(envs)
@@ -101,3 +101,6 @@ def _set(envs):
     if device_type.lower() == "mlu":
         envs = {"FLAGS_use_stride_kernel": "0"}
         _set(envs)
+    if device_type.lower() == "gcu":
+        envs = {"FLAGS_use_stride_kernel": "0"}
+        _set(envs)
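
Like MLU, GCU disables the stride kernel by setting `FLAGS_use_stride_kernel=0`. A self-contained sketch of the flag selection in the hunk above (other per-device env flags from the surrounding function are elided; the helper name is illustrative):

```python
def stride_kernel_flags(device_type: str) -> dict:
    """Env flags applied per device: mlu and gcu both force the
    stride kernel off, other devices set nothing here."""
    if device_type.lower() in ("mlu", "gcu"):
        return {"FLAGS_use_stride_kernel": "0"}
    return {}
```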