I deleted the previous model and created a new one. I tried SAEHD training again, but I got the same error. I can't figure out why this error keeps happening. I'm about to go crazy…
[n] Enable pretraining mode ( y/n ?:help ) : y
Initializing models: 80%|##################################################4 | 4/5 [01:31<00:22, 22.88s/it]
Error: OOM when allocating tensor with shape[131072,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node src_dst_opt/ms_inter_AB/dense1/weight_0/Assign (defined at C:\Users\Ersin\Desktop\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series_build_11_20_2021\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py:37) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn’t available when running in Eager mode.
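For reference, the hint at the end of the log refers to TensorFlow's RunOptions protobuf, which has a report_tensor_allocations_upon_oom flag. A minimal sketch of how one might turn it on, assuming you are willing to edit the bundled DeepFaceLab code and that the failing call is an ordinary graph-mode session.run (the exact file and call site are assumptions, not something confirmed by the log):

# Minimal sketch, not DeepFaceLab's actual code.
import tensorflow as tf

# Build RunOptions with the OOM-report flag the hint mentions.
run_options = tf.compat.v1.RunOptions(report_tensor_allocations_upon_oom=True)

# Pass it to whichever session.run call raises the OOM, for example
# (hypothetical call site inside the model initializer):
# sess.run(init_op, options=run_options)

With that flag set, TensorFlow prints the list of tensors allocated on the GPU when the OOM occurs, which makes it easier to see what is actually filling the card's memory.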