Pretraining Problem
This topic has 3 replies, 3 voices, and was last updated 10 months ago by mrthong.
November 1, 2023 at 1:29 am #9099
Ismail111
Participant
Hi guys! I've never created a video with DeepFaceLab before. I heard that if I do pretraining first, every deepfake video I make afterwards will be faster and easier. When I open the DeepFaceLab folder, I run the 6) train SAEHD file directly, but I get an error at the last step because of the values I entered.
My system specs:
GPU: RTX 3060
CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
RAM: 16GB
Can you tell me what path I should follow and what values I should enter?
November 3, 2023 at 7:20 am #9102
deepfakeclub
Participant
I always start with the default values. If you are a beginner, I would not recommend trying to do all the complicated stuff at once. Here is what I would recommend:
Train using the default values.
Delete the model that is already in the model folder.
Create a completely new one, or import a pretrained model that you can find somewhere; this site offers really good community-made models.
If this didn't help, check your workspace folder and see that everything is correct in there.
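If you want to double-check that last step, here is a minimal sketch in Python (assuming the standard DeepFaceLab workspace layout with data_src, data_dst, and model folders; the workspace path is just an example) that lists what is in each folder and which old model files are still sitting around:

```python
from pathlib import Path

# Example path; adjust to wherever your DeepFaceLab workspace lives.
workspace = Path(r"C:\DeepFaceLab\workspace")

# Standard workspace layout (assumption: default DFL build).
for name in ("data_src", "data_dst", "model"):
    folder = workspace / name
    if not folder.is_dir():
        print(f"MISSING: {folder}")
        continue
    print(f"{name}: {len(list(folder.iterdir()))} files")

# Leftover SAEHD model files carry the old settings, so delete them
# (or move them out) before creating a fresh model with new values.
for f in (workspace / "model").glob("*SAEHD*"):
    print("old model file:", f.name)
```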
November 4, 2023 at 5:18 pm #9103
Ismail111
Participant
I deleted the previous model and created a new one. I tried to run SAEHD training again, but I got the same error. I can't figure out why I'm getting it. I'm about to go crazy…
[n] Enable pretraining mode ( y/n ?:help ) : y
Initializing models: 80%|##################################################4 | 4/5 [01:31<00:22, 22.88s/it]
Error: OOM when allocating tensor with shape[131072,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node src_dst_opt/ms_inter_AB/dense1/weight_0/Assign (defined at C:\Users\Ersin\Desktop\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series_build_11_20_2021\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:37) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

December 27, 2024 at 11:07 am #10304
mrthong
Participant
Lower your batch size.
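The shape in the error message already tells you how big that single allocation is, and activation memory on top of it grows with batch size. A rough back-of-envelope sketch (plain Python, not DeepFaceLab's actual allocator; the linear batch-size scaling is an assumption, but it is a reasonable first approximation):

```python
def tensor_mib(shape, bytes_per_elem=4):
    """Approximate size of one dense float32 tensor in MiB."""
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_elem / 2**20

# The tensor named in the OOM message: shape [131072, 256], float32.
print(tensor_mib([131072, 256]))  # -> 128.0 MiB for one optimizer tensor

# Activation memory scales roughly linearly with batch size (assumption),
# so halving the batch frees roughly half of that portion of VRAM and
# often gets you past an OOM like this one.
for batch in (8, 4, 2):
    print(f"batch {batch}: ~{batch / 8:.0%} of the batch-8 activation memory")
```

So if the model runs out of memory at the default batch size, halve it and try again.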
