SAEHD does not work


  • #3191
    Anonymous

      Training with SAEHD does not work.
      I did all the steps up to step 6.
      Training with XSeg also worked, but SAEHD does not.
      I don’t know what I’m doing wrong. Please help me.

      Best regards
      aron

      Here are my parameters:

      Running trainer.

      [new] No saved models found. Enter a name of a new model : head20
      head20

      Model first run.

      Choose one or several GPU idxs (separated by comma).

      [CPU] : CPU
      [0] : NVIDIA GeForce GTX 950M

      [0] Which GPU indexes to choose? : 0
      0

      [0] Autobackup every N hour ( 0..24 ?:help ) :
      0
      [n] Write preview history ( y/n ?:help ) :
      n
      [0] Target iteration : 10
      10
      [n] Flip SRC faces randomly ( y/n ?:help ) :
      n
      [y] Flip DST faces randomly ( y/n ?:help ) :
      y
      [4] Batch_size ( ?:help ) : 2
      2
      [128] Resolution ( 64-640 ?:help ) : 64
      64
      [f] Face type ( h/mf/f/wf/head ?:help ) :
      f
      [liae-ud] AE architecture ( ?:help ) :
      liae-ud
      [256] AutoEncoder dimensions ( 32-1024 ?:help ) :
      256
      [64] Encoder dimensions ( 16-256 ?:help ) :
      64
      [64] Decoder dimensions ( 16-256 ?:help ) :
      64
      [22] Decoder mask dimensions ( 16-256 ?:help ) :
      22
      [n] Eyes and mouth priority ( y/n ?:help ) :
      n
      [n] Uniform yaw distribution of samples ( y/n ?:help ) :
      n
      [n] Blur out mask ( y/n ?:help ) :
      n
      [y] Place models and optimizer on GPU ( y/n ?:help ) :
      y
      [y] Use AdaBelief optimizer? ( y/n ?:help ) :
      y
      [n] Use learning rate dropout ( n/y/cpu ?:help ) :
      n
      [y] Enable random warp of samples ( y/n ?:help ) :
      y
      [0.0] Random hue/saturation/light intensity ( 0.0 .. 0.3 ?:help ) :
      0.0
      [0.0] GAN power ( 0.0 .. 5.0 ?:help ) :
      0.0
      [0.0] Face style power ( 0.0..100.0 ?:help ) :
      0.0
      [0.0] Background style power ( 0.0..100.0 ?:help ) :
      0.0
      [none] Color transfer for src faceset ( none/rct/lct/mkl/idt/sot ?:help ) :
      none
      [n] Enable gradient clipping ( y/n ?:help ) :
      n
      [n] Enable pretraining mode ( y/n ?:help ) :
      n
      Initializing models: 100%|###############################################################| 5/5 [00:04<00:00, 1.23it/s]
      Loading samples: 100%|###############################################################| 461/461 [00:40<00:00, 11.50it/s]
      Loading samples: 100%|###############################################################| 399/399 [00:38<00:00, 10.44it/s]
      ================== Model Summary ===================
      == ==
      == Model name: head20_SAEHD ==
      == ==
      == Current iteration: 0 ==
      == ==
      ==---------------- Model Options -----------------==
      == ==
      == resolution: 64 ==
      == face_type: f ==
      == models_opt_on_gpu: True ==
      == archi: liae-ud ==
      == ae_dims: 256 ==
      == e_dims: 64 ==
      == d_dims: 64 ==
      == d_mask_dims: 22 ==
      == masked_training: True ==
      == eyes_mouth_prio: False ==
      == uniform_yaw: False ==
      == blur_out_mask: False ==
      == adabelief: True ==
      == lr_dropout: n ==
      == random_warp: True ==
      == random_hsv_power: 0.0 ==
      == true_face_power: 0.0 ==
      == face_style_power: 0.0 ==
      == bg_style_power: 0.0 ==
      == ct_mode: none ==
      == clipgrad: False ==
      == pretrain: False ==
      == autobackup_hour: 0 ==
      == write_preview_history: False ==
      == target_iter: 10 ==
      == random_src_flip: False ==
      == random_dst_flip: True ==
      == batch_size: 2 ==
      == gan_power: 0.0 ==
      == gan_patch_size: 8 ==
      == gan_dims: 16 ==
      == ==
      ==------------------- Running On -------------------==
      == ==
      == Device index: 0 ==
      == Name: NVIDIA GeForce GTX 950M ==
      == VRAM: 1.35GB ==
      == ==
      ====================================================
      Starting. Target iteration: 10. Press "Enter" to stop training and save model.

      Trying to do the first iteration. If an error occurs, reduce the model parameters.

      !!!
      Windows 10 users IMPORTANT notice. You should set this setting in order to work correctly.

      [linked screenshot on imgur.com]


      !!!
      You are training the model from scratch. It is strongly recommended to use a pretrained model to speed up the training and improve the quality.

      #3225
      deepfakery
      Keymaster

        Are you getting an OOM error?
        Your card doesn’t have much VRAM so you need to disable some things. Try either or both of these:

        [y] Place models and optimizer on GPU ( y/n ?:help ) :
        n
        [y] Use AdaBelief optimizer? ( y/n ?:help ) :
        n

        Might also try increasing your page file size.
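
        For context, the page file is disk space Windows uses as an overflow when physical RAM runs out. If DFL dies with a plain Python MemoryError rather than a GPU out-of-memory error, it is system RAM plus page file that was exhausted, so enlarging the page file can help. As a rough way to check how much headroom the machine has, here is a minimal sketch, assuming psutil is available to whatever Python you run it with (e.g. pip install psutil; it is not part of DFL itself):

        import psutil  # third-party package, not bundled with DFL

        vm = psutil.virtual_memory()  # physical RAM
        sm = psutil.swap_memory()     # page file / swap usage

        print(f"RAM  total / available: {vm.total / 2**30:.1f} GiB / {vm.available / 2**30:.1f} GiB")
        print(f"Swap total / used     : {sm.total / 2**30:.1f} GiB / {sm.used / 2**30:.1f} GiB")

        The page file size itself is changed under System Properties > Advanced > Performance Settings > Advanced > Virtual memory.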

        #3564
        Anonymous

          Thanks for your answer.
          I implemented the first two suggestions, but still got an error message.
          When I try to train Quick96, the error message also appears.
          What do you mean by the last point: “Might also try increasing your page file size.”?
          Can you explain that in a bit more detail?
          Best regards
          aron

          Running trainer.

          [new] No saved models found. Enter a name of a new model :
          new

          Model first run.

          Choose one or several GPU idxs (separated by comma).

          [CPU] : CPU
          [0] : NVIDIA GeForce GTX 950M

          [0] Which GPU indexes to choose? : 0
          0

          [0] Autobackup every N hour ( 0..24 ?:help ) :
          0
          [n] Write preview history ( y/n ?:help ) :
          n
          [10] Target iteration :
          10
          [n] Flip SRC faces randomly ( y/n ?:help ) :
          n
          [y] Flip DST faces randomly ( y/n ?:help ) :
          y
          [2] Batch_size ( ?:help ) :
          2
          [64] Resolution ( 64-640 ?:help ) :
          64
          [f] Face type ( h/mf/f/wf/head ?:help ) :
          f
          [liae-ud] AE architecture ( ?:help ) :
          liae-ud
          [256] AutoEncoder dimensions ( 32-1024 ?:help ) :
          256
          [64] Encoder dimensions ( 16-256 ?:help ) :
          64
          [64] Decoder dimensions ( 16-256 ?:help ) :
          64
          [22] Decoder mask dimensions ( 16-256 ?:help ) :
          22
          [n] Eyes and mouth priority ( y/n ?:help ) :
          n
          [n] Uniform yaw distribution of samples ( y/n ?:help ) :
          n
          [n] Blur out mask ( y/n ?:help ) :
          n
          [n] Place models and optimizer on GPU ( y/n ?:help ) :
          n
          [n] Use AdaBelief optimizer? ( y/n ?:help ) :
          n
          [n] Use learning rate dropout ( n/y/cpu ?:help ) :
          n
          [y] Enable random warp of samples ( y/n ?:help ) :
          y
          [0.0] Random hue/saturation/light intensity ( 0.0 .. 0.3 ?:help ) :
          0.0
          [0.0] GAN power ( 0.0 .. 5.0 ?:help ) :
          0.0
          [0.0] Face style power ( 0.0..100.0 ?:help ) :
          0.0
          [0.0] Background style power ( 0.0..100.0 ?:help ) :
          0.0
          [none] Color transfer for src faceset ( none/rct/lct/mkl/idt/sot ?:help ) :
          none
          [n] Enable gradient clipping ( y/n ?:help ) :
          n
          [n] Enable pretraining mode ( y/n ?:help ) :
          n
          Initializing models: 100%|###############################################################| 5/5 [00:02<00:00, 1.72it/s]
          Loading samples: 100%|###############################################################| 461/461 [00:41<00:00, 11.06it/s]
          Loading samples: 100%|###############################################################| 399/399 [00:38<00:00, 10.41it/s]
          Process Process-13:
          Traceback (most recent call last):
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 134, in batch_func
          x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 145, in process
          img = get_eyes_mouth_mask()*mask
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 80, in get_eyes_mouth_mask
          return np.clip(mask, 0, 1)
          File "<__array_function__ internals>", line 6, in clip
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\numpy\core\fromnumeric.py", line 2097, in clip
          return _wrapfunc(a, 'clip', a_min, a_max, out=out, **kwargs)
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\numpy\core\fromnumeric.py", line 58, in _wrapfunc
          return bound(*args, **kwds)
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\numpy\core\_methods.py", line 141, in _clip
          um.clip, a, min, max, out=out, casting=casting, **kwargs)
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\numpy\core\_methods.py", line 94, in _clip_dep_invoke_with_casting
          return ufunc(*args, out=out, **kwargs)
          MemoryError: Unable to allocate 16.0 MiB for an array with shape (2048, 2048, 1) and data type float32

          During handling of the above exception, another exception occurred:

          Traceback (most recent call last):
          File "multiprocessing\process.py", line 258, in _bootstrap
          File "multiprocessing\process.py", line 93, in run
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\joblib\SubprocessGenerator.py", line 54, in process_func
          gen_data = next (self.generator_func)
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 136, in batch_func
          raise Exception ("Exception occured in sample %s. Error: %s" % (sample.filename, traceback.format_exc() ) )
          Exception: Exception occured in sample C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\workspace\data_dst\aligned\00081_0.jpg. Error: Traceback (most recent call last):
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 134, in batch_func
          x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 145, in process
          img = get_eyes_mouth_mask()*mask
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 80, in get_eyes_mouth_mask
          return np.clip(mask, 0, 1)
          File "<__array_function__ internals>", line 6, in clip
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\numpy\core\fromnumeric.py", line 2097, in clip
          return _wrapfunc(a, 'clip', a_min, a_max, out=out, **kwargs)
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\numpy\core\fromnumeric.py", line 58, in _wrapfunc
          return bound(*args, **kwds)
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\numpy\core\_methods.py", line 141, in _clip
          um.clip, a, min, max, out=out, casting=casting, **kwargs)
          File "C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\numpy\core\_methods.py", line 94, in _clip_dep_invoke_with_casting
          return ufunc(*args, out=out, **kwargs)
          MemoryError: Unable to allocate 16.0 MiB for an array with shape (2048, 2048, 1) and data type float32

          Press any key to continue . . .
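
          As a quick cross-check of that error: a float32 array of shape (2048, 2048, 1) really is only 16 MiB, so the failed allocation was tiny; when even a request that small fails, the commit limit (RAM plus page file) has been reached, which is what the page-file suggestion above is about. A one-line verification of the arithmetic:

          import numpy as np
          print(np.prod((2048, 2048, 1)) * np.dtype(np.float32).itemsize / 2**20)  # 16.0 (MiB)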

          #3567
          deepfakery
          Keymaster

            Hey Aron,
            I just posted a guide which might help. It has some recommendations for DeepFaceLab system optimization.

            DeepFaceLab 2.0 Guide

            Your card doesn’t have much VRAM available, which is going to be the major problem, even with really low settings.
            Check out this table: https://www.deepfakevfx.com/guides/model-training-settings/
            You might need to disable the -U and/or -D model options. Try these settings: LIAE/112/256/64/64/22
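
            For reference, dropping the -U/-D options just means answering the architecture prompt with plain liae instead of liae-ud, and 112 goes in at the resolution prompt; the 256/64/64/22 dims match what you already entered. Roughly (these prompts only appear when a model is first created, so it would be a new model):

            [liae-ud] AE architecture ( ?:help ) : liae
            liae
            [128] Resolution ( 64-640 ?:help ) : 112
            112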

            Also were you able to run Quick96 at all?

            #3569
            deepfakery
            Keymaster

              I just noticed something else in the path to your DFL:
              “C:\Users\aronk\Desktop\11.20. DeepFaceLab_NVIDIA_up_to_RTX2080Ti”
              Try putting it in the root of C: rather than on the desktop, and remove the space from the folder name.

              #3570
              Anonymous

                Hey deepfakery,

                thanks for your reply.
                I’ll work through the guide. I moved DFL to C:.
                Regarding your question: “Also were you able to run Quick96 at all?” Unfortunately, no.

                Best regards
                aron
