# Libtorch-python
## Export the model
### Install [modelscope and funasr](https://github.com/alibaba-damo-academy/FunASR#installation)
```shell
pip install onnx onnxruntime # Optional, for onnx quantization
python -m funasr.export.export_model --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type torch --quantize True
```
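The export command above should leave the TorchScript model and its configs under the export directory. A quick sanity check could look like the sketch below; the `export-dir/model-name/` layout and the artifact names are assumptions based on the `--export-dir` flag and the demo section of this README.

```python
from pathlib import Path
from typing import List

def missing_artifacts(export_dir: str, model_name: str) -> List[str]:
    # Artifacts the demo expects; the export_dir/model_name layout
    # is an assumption based on the --export-dir flag above.
    expected = ["model.torchscripts", "config.yaml", "am.mvn"]
    root = Path(export_dir) / model_name
    return [name for name in expected if not (root / name).is_file()]
```

An empty return value means all expected files are in place.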
## Install the `funasr_torch`
Install from pip:
```shell
pip install -U funasr_torch
# For the users in China, you could install with the command:
# pip install -U funasr_torch -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
Or install from source code:
```shell
pip install -e ./
# For the users in China, you could install with the command:
# pip install -e ./ -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
## Run the demo
- `Model_dir`: the model path, which contains `model.torchscripts`, `config.yaml`, `am.mvn`.
- `Input`: wav format file; supported input types: `str`, `np.ndarray`, `List[str]`.
- `Output`: `List[str]`, the recognition result.
- Example:
```python
from funasr_torch import Paraformer

# Path produced by the export step above (--export-dir ./export)
model_dir = "./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
model = Paraformer(model_dir, batch_size=1)

wav_path = "./asr_example.wav"  # any 16k wav file
result = model(wav_path)
print(result)
```
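The accepted input types (`str`, `np.ndarray`, `List[str]`) imply a batching step before inference. A hypothetical helper (not part of `funasr_torch`) sketching how a single path could be normalized into a batch:

```python
from typing import List, Union

def as_batch(wav_input: Union[str, List[str]]) -> List[str]:
    # Wrap a single wav path into a one-element batch; a list of
    # paths passes through unchanged. (np.ndarray input is omitted
    # from this sketch to keep it dependency-free.)
    if isinstance(wav_input, str):
        return [wav_input]
    return list(wav_input)
```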
## Performance benchmark
Please refer to [benchmark](https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/benchmark_libtorch.md).
## Speed
Test wav, 5.53s, 100 times avg.

| Onnx | 0.038 |
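The "100 times avg." figure above can be reproduced with a simple timing harness. A stdlib-only sketch, where `run_once` stands in for one model call:

```python
import time

def avg_seconds(run_once, runs: int = 100) -> float:
    # Average wall-clock seconds per call over `runs` repetitions,
    # matching the "100 times avg." methodology above.
    start = time.perf_counter()
    for _ in range(runs):
        run_once()
    return (time.perf_counter() - start) / runs
```

For example, `avg_seconds(lambda: model(wav_path))` would time the demo model above.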
## Acknowledge
This project is maintained by the [FunASR community](https://github.com/alibaba-damo-academy/FunASR).