Description
Hello,
I ran your code and it worked: it trained and saved a model for me, but only in the .h5 format; the .onnx model was never created. Apparently my video card is too weak and does not support the required computations. This is what happened at the end of training:
```
2025-01-31 18:26:02.160816: F tensorflow/stream_executor/cuda/cuda_driver.cc:345] Check failed: CUDA_SUCCESS == cuDevicePrimaryCtxGetState(device, &former_primary_context_flags, &former_primary_context_is_active) (0 vs. 303)
```
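(Editor's note: this check failure means TensorFlow could not create a CUDA context on the GPU, so the process aborted before the ONNX export ran. A minimal sketch of a common workaround is to hide the GPU so TensorFlow falls back to CPU; the environment variable must be set before TensorFlow is imported anywhere in the process.)

```python
import os

# Hide all CUDA devices so TensorFlow initializes in CPU-only mode.
# This must happen before the first `import tensorflow` in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# import tensorflow as tf  # now safe: TF will not touch the broken CUDA driver
```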
But when I tried to check the model by running the inference script, an error occurred:
```
(venv) G:\testPyton>python inferenceModel.py
Traceback (most recent call last):
  File "G:\testPyton\inferenceModel.py", line 31, in <module>
    model = ImageToWordModel(model_path=configs.model_path, char_list=configs.vocab)
  File "G:\testPyton\inferenceModel.py", line 10, in __init__
    super().__init__(*args, **kwargs)
  File "G:\testPyton\venv\lib\site-packages\mltu\inferenceModel.py", line 51, in __init__
    raise Exception(f"Model path ({self.model_path}) does not exist")
Exception: Model path (Models/02_captcha_to_text/202501311820\model.onnx) does not exist
```
Please tell me how to run the model in .h5 format? Otherwise I can't do anything with it: I can neither check it nor use it...
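(Editor's note: until the ONNX export succeeds, the saved .h5 checkpoint can be loaded directly with Keras and converted offline. A sketch under the assumption that `tf2onnx` is installed; the paths below are hypothetical, taken from the error message above, and `compile=False` skips restoring the training loss, which mltu implements as a custom CTC loss and which is not needed for inference.)

```python
import os

# Hypothetical paths -- substitute your actual training run folder.
h5_path = os.path.join("Models", "02_captcha_to_text", "202501311820", "model.h5")
onnx_path = h5_path.replace("model.h5", "model.onnx")

try:
    import tensorflow as tf
    import tf2onnx
except ImportError as exc:
    print(f"skipping conversion, missing dependency: {exc}")
else:
    # compile=False avoids needing the custom CTC loss used during training.
    model = tf.keras.models.load_model(h5_path, compile=False)
    # from_keras writes the ONNX graph directly to output_path.
    tf2onnx.convert.from_keras(model, output_path=onnx_path)
    print(f"saved {onnx_path}")
```

Alternatively, tf2onnx's command-line interface should do the same: `python -m tf2onnx.convert --keras model.h5 --output model.onnx`.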
- tensorflow 2.10.0
- mltu 1.2.5
- onnx 1.17.0