With an unmodified `requirements.txt` I get import problems related to `pytorch-lightning` trying to import things from `torchmetrics`. Then, if I downgrade `pytorch-lightning` to 0.6.0 as suggested in other discussions, I get:
$ echo "Minulla on koira." | python3 tnpp_parse.py --conf models_fi_tdt_dia/pipelines.yaml parse_plaintext
/scratch/clarin/hardwick/Turku-neural-parser-pipeline/venv-tnpp/lib64/python3.9/site-packages/pytorch_lightning/core/decorators.py:65: LightningDeprecationWarning: The `@auto_move_data` decorator is deprecated in v1.3 and will be removed in v1.5. Please use `trainer.predict` instead for inference. The decorator was applied to `predict`
rank_zero_deprecation(
INFO:root:Loading model from /scratch/clarin/hardwick/Turku-neural-parser-pipeline/models_fi_tdt_dia/Tagger/best.ckpt
/scratch/clarin/hardwick/Turku-neural-parser-pipeline/venv-tnpp/lib64/python3.9/site-packages/sklearn/base.py:288: UserWarning: Trying to unpickle estimator LabelEncoder from version 0.24.2 when using version 1.2.0. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
warnings.warn(
Lemmatizer device: cpu / -1
Waiting for input
Feeding final batch
Some weights of the model checkpoint at TurkuNLP/bert-base-finnish-cased-v1 were not used when initializing BertModel: ['cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of the model checkpoint at TurkuNLP/bert-base-finnish-cased-v1 were not used when initializing BertModel: ['cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Process ForkProcess-6:
Traceback (most recent call last):
  File "/usr/lib64/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib64/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/scratch/clarin/hardwick/Turku-neural-parser-pipeline/tnparser/lemmatizer_mod.py", line 234, in launch
    lemmatizer=LemmatizerWrapper(args)
  File "/scratch/clarin/hardwick/Turku-neural-parser-pipeline/tnparser/lemmatizer_mod.py", line 224, in __init__
    self.lemmatizer_model.init_model(args)
  File "/scratch/clarin/hardwick/Turku-neural-parser-pipeline/tnparser/lemmatizer_mod.py", line 60, in init_model
    self.translator = self.build_my_translator(args.model, self.f_output, use_gpu=use_gpu, gpu_device=device, beam_size=args.beam_size, max_length=args.max_length)
  File "/scratch/clarin/hardwick/Turku-neural-parser-pipeline/tnparser/lemmatizer_mod.py", line 85, in build_my_translator
    fields, model, model_opt = self.load_model(model_name, use_gpu=use_gpu, gpu_device=gpu_device)
  File "/scratch/clarin/hardwick/Turku-neural-parser-pipeline/tnparser/lemmatizer_mod.py", line 68, in load_model
    checkpoint = torch.load(model, map_location=lambda storage, loc: storage)
  File "/scratch/clarin/hardwick/Turku-neural-parser-pipeline/venv-tnpp/lib64/python3.9/site-packages/torch/serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/scratch/clarin/hardwick/Turku-neural-parser-pipeline/venv-tnpp/lib64/python3.9/site-packages/torch/serialization.py", line 1131, in _load
    result = unpickler.load()
  File "/scratch/clarin/hardwick/Turku-neural-parser-pipeline/venv-tnpp/lib64/python3.9/site-packages/torch/serialization.py", line 1124, in find_class
    return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'onmt.inputters.text_dataset'
Error: pipeline stage died with exit code 256: lemmatizer_mod --model /scratch/clarin/hardwick/Turku-neural-parser-pipeline/models_fi_tdt_dia/Lemmatizer/lemmatizer.pt
Error: pipeline stage died with exit code 15: bert512_mod --vocabfile TurkuNLP/bert-base-finnish-cased-v1 --max 400
Exception ignored in: <Finalize object, dead>
Traceback (most recent call last):
  File "/usr/lib64/python3.9/multiprocessing/util.py", line 224, in __call__
  File "/usr/lib64/python3.9/multiprocessing/queues.py", line 198, in _finalize_join
TypeError: 'NoneType' object is not callable
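The `ModuleNotFoundError` at the end comes from `torch.load` unpickling the lemmatizer checkpoint: the pickle records the module path `onmt.inputters.text_dataset` from whatever OpenNMT-py version the model was saved with, and the currently installed OpenNMT-py apparently no longer provides that module, so `find_class` fails. A minimal check I ran to confirm this (a sketch, assuming OpenNMT-py is installed in the same venv):

```python
# Sketch: verify whether the module path recorded in the lemmatizer pickle
# still resolves against the OpenNMT-py version installed in this venv.
import importlib

try:
    importlib.import_module("onmt.inputters.text_dataset")
    print("onmt.inputters.text_dataset is importable; the unpickle error lies elsewhere")
except ModuleNotFoundError:
    import onmt
    print("module path missing; installed OpenNMT-py:",
          getattr(onmt, "__version__", "unknown"))
```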
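For the original `pytorch-lightning` / `torchmetrics` import errors, the exact version pairing in the venv seems to be what matters, so here is a quick way to capture the installed versions when reproducing this (a sketch; the names are PyPI distribution names and may need adjusting for your setup):

```python
# Sketch: print the versions of the packages involved in this report,
# so the failing combination can be pinned down.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("pytorch-lightning", "torchmetrics", "torch", "OpenNMT-py", "scikit-learn"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```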