There are pretrained Models available. Why should I use them?
This topic has 11 replies, 7 voices, and was last updated 1 year, 4 months ago by dotrungtu90.
November 14, 2022 at 5:55 am #5949 FrankyM (Participant)
Are the pretrained models helpful? Until now I haven't used them.
November 22, 2022 at 6:18 pm #6953 deepfakery (Keymaster)
When a model is first started it is a blank slate; it has no idea what the data will be. The pretrained models have already learned a variety of face shapes, alignments, masks, color information, etc. When you use a pretrained model you are essentially shortcutting the process by starting with a template, for lack of a better term. Someone else has done part of the work for you. Choose a pretrained model with settings that will work well on your system and use that as the base of your deepfakes.
Another advantage of using a pretrained model is that you can choose a model with very high settings and still get a result in a reasonable amount of time. For instance, if the model will only run at batch size 4 on your system, training it from scratch would normally take weeks or even months. However, by using the pretrained model you can again shortcut the process: you still run at a low batch size, but the pretraining has been done by someone with a much more powerful system and a higher batch size.
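To make that batch-size trade-off concrete, here is a rough back-of-the-envelope sketch in plain Python (not DFL code; the batch sizes and iteration counts are made up purely for illustration) of how many face pairs the model "sees" in each scenario:

```python
def faces_seen(batch_size: int, iterations: int) -> int:
    # Each iteration shows the model one batch of face pairs.
    return batch_size * iterations

# Someone pretraining on a powerful GPU at batch size 16 (illustrative numbers):
pretrained = faces_seen(batch_size=16, iterations=500_000)   # 8,000,000 faces
# Continuing on a modest GPU at batch size 4:
own_run = faces_seen(batch_size=4, iterations=100_000)       #   400,000 faces

print(f"Pretraining at batch 16 already showed the model {pretrained:,} faces")
print(f"Your continuation at batch 4 adds another {own_run:,} faces")
# Reaching those 8,000,000 faces alone at batch 4 would take 2,000,000 iterations.
```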
December 6, 2022 at 7:23 am #7591 FrankyM (Participant)
Thank you for your answer. I found a how-to video on the internet for the pretrained models. The explanation helped me use them, and I think you are right: it works faster.
If possible I would like to ask another question. I used the pretrained models from your website, 128 f with AdaBelief. The results were good after 70,000 to 100,000 iterations and I was happy. Now I have upgraded my PC with an RTX 3050. I tried 192 f with a pretrained model that had AdaBelief disabled, and set AdaBelief to True in my project. I'm not sure, does it take much longer to train at the higher resolution? The result after 100,000 iterations was not good, very unsharp. I am now trying the same project with 128 again.
December 8, 2022 at 5:08 am #7597 dvdfakfy (Participant)
Where did you get a pretrained model? How do you make it a pretrained model?
December 9, 2022 at 10:02 am #7600 FrankyM (Participant)
Look under the Download topic on this site. There you can download a pretrained model. Copy the files into the model folder in your DFL environment. Don't forget to deactivate the pretraining mode under the options; it's the last option.
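For anyone scripting that step, a minimal sketch of the "copy the files into the model folder" part, assuming the usual DFL workspace layout (the paths here are illustrative, adjust them to your own download and install locations):

```python
from pathlib import Path
import shutil

# Illustrative paths only -- point these at your actual folders.
downloaded = Path("Downloads/pretrained_SAEHD")     # extracted pretrained model files
model_dir = Path("DeepFaceLab/workspace/model")     # the "model" folder of your DFL workspace

model_dir.mkdir(parents=True, exist_ok=True)
for f in downloaded.iterdir():
    if f.is_file():
        shutil.copy2(f, model_dir / f.name)         # keep the original file names

# Afterwards start "6) train SAEHD" and answer no when asked to enable pretraining mode.
```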
December 22, 2022 at 1:25 pm #7657 deepfakery (Keymaster)
There are some things to consider besides resolution, for instance the AE, E, and D dims. If these are higher then it will take a more powerful GPU and/or a longer time to train. Keep in mind that going from 128 to 192 is already a 50% increase in resolution, which translates to significantly more VRAM usage.
Also I'm not sure about the effect of enabling AdaBelief after pretraining. I think the model will need to adapt, which means more training time. You might try continuing the pretraining with AdaBelief enabled before using the model, to see if that gives a better result.
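A quick bit of arithmetic behind the resolution point above: pixel count grows with the square of the resolution, so a 50% jump per side is more than double the pixels per face (a rough proxy for the extra work, not an exact VRAM formula):

```python
def pixel_ratio(old_res: int, new_res: int) -> float:
    # Pixels per face grow with the square of the resolution.
    return (new_res / old_res) ** 2

print(pixel_ratio(128, 192))   # 2.25   -> more than double the pixels
print(pixel_ratio(128, 224))   # 3.0625 -> over three times the pixels
```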
December 24, 2022 at 8:29 am #7668 FrankyM (Participant)
That's not easy to decide. I get good results with resolution 128 and the pretrained model from IPEROV when I train 120,000 to 200,000 iterations, also with AdaBelief. When I use a higher resolution, e.g. 192 or 224, with other pretrained models from your page, I think the result is not good. Maybe I will try without AdaBelief or train from scratch…
December 28, 2022 at 3:48 pm #7696 defalafa (Participant)
Check that the page file is on the auto setting and that you have enough disk space.
Use DF models instead of LIAE – they need fewer resources.
Resolution over dims – always go for the higher resolution, not the higher dims.
Setup (see the sketch below):
models_opt_on_gpu: False
batch size 4 – if that doesn't work, the model is too heavy for your system.
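The settings above restated as a simple Python summary; this is a checklist, not actual DFL code, and the key names only roughly mirror the SAEHD training prompts, whose exact wording may differ:

```python
suggested_settings = {
    "archi": "df",               # DF instead of LIAE: needs fewer resources
    "models_opt_on_gpu": False,  # keep optimizer tensors in system RAM to free VRAM
    "batch_size": 4,             # if even 4 fails, the model is too heavy for your GPU
    # and prefer raising resolution over raising the dims when you have headroom
}
print(suggested_settings)
```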
February 8, 2023 at 9:57 am #7930 Dracor1an (Participant)
Hi!
I am new to the DFL community, so here is my stupid question:
How can I disable pretrained mode in a downloaded model without running the train scripts?
February 15, 2023 at 9:45 am #7939 defalafa (Participant)
Just set pretraining mode = FALSE while training your desired SRC/DST using 6) train SAEHD.
You only need to do it once; it will stay that way.
March 2, 2023 at 7:27 am #8028 turnip26 (Participant)
Here's a quick video I did showing the benefits of using a pre-trained model. The differences are clear even after just 1000 iterations.
September 7, 2023 at 7:12 pm #8969 dotrungtu90 (Participant)
How do I use a pretrained model (SAEHD)?
I would appreciate it if you could point me to a video tutorial showing how to do it.
Thank you!