Which resolution is good enough?


  • #5950
    FrankyM
    Participant

      I work with an GTX 1650 with 4 GB. I train my models with 128. When I don’t use Adabelief I can also use 160 probably higher. Some people use 512 is this so much better? Which resolution for which video resolution should I use. I think 128 works good and is fast. I tested 192 with my APU and it was better but lasts very long. Do you have experiences?

#7695
defalafa
Participant

Don't waste your time with a 4 GB card, it makes no sense; the whole system should match or you will wait forever.

At least 6 GB VRAM, an SSD, 16 GB RAM and an i5-11400F or better.

I'd advise an RTX 3060 12 GB as the best card for starting out.

Above a model resolution of 256 for face or 320 for full face you can get better close-ups.

Below that, only make 720p dst files or lower resolution, since the result will get blurry.

#7718
FrankyM
Participant

Thank you very much for your advice. I now use an RTX 3050 with 8 GB of VRAM and I'm testing higher resolutions.

#7730
genesis1
Participant

What would you suggest as the best setup for an X99 6-core CPU, 16 GB RAM and a GTX 1080 Ti 11 GB? What's the best resolution? I've been training at 256 for 3 days and it still looks blurry when I test it in a video. In fact it only looks as good as a 128 model that I trained for less time.
I've recently read that pretraining can help speed up learning. Is there a good face set to download on this forum?

#7742
defalafa
Participant

Always use a pretrained model. You need > 600k iterations of pretraining before starting your regular model training, so don't start from scratch.

Go to the model selection and use a 256-res model. Try 256 DF-UDT in F or WF (face / whole face) using default dims, and start with random warp, batch 6. When using a downloaded model for the first time, remember to set iter = 0 and pretraining = N, or the model will stay in pretraining mode (there is a small sanity-check sketch after the options below).

It should look something like this:

==---------------------- Model Options ----------------------==
              == ==
              == resolution: 256 ==
              == face_type: f ==
              == models_opt_on_gpu: True ==
              == archi: df-udt ==
              == ae_dims: 256 ==
              == e_dims: 64 ==
              == d_dims: 64 ==
              == d_mask_dims: 22 ==
              == masked_training: True ==
              == eyes_mouth_prio: False ==
              == uniform_yaw: False ==
              == blur_out_mask: False ==
              == adabelief: True ==
              == lr_dropout: n ==
              == random_warp: True ==
              == random_hsv_power: 0.0 ==
              == true_face_power: 0.0 ==
              == face_style_power: 0.0 ==
              == bg_style_power: 0.0 ==
              == ct_mode: rct ==
              == clipgrad: False ==
              == pretrain: False ==
              == autobackup_hour: 0 ==
              == write_preview_history: False ==
              == target_iter: 0 ==
              == random_src_flip: False ==
              == random_dst_flip: True ==
              == batch_size: 6 ==
              == gan_power: 0.0 ==
              == gan_patch_size: 32 ==
              == gan_dims: 16 ==
              == ==
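
Not part of DeepFaceLab itself, just an illustration: a small Python sketch that reads an options summary pasted in the "== key: value ==" format above and flags the first-run pitfalls mentioned earlier (pretrain still on, target_iter not reset). The "suggested" values are only the ones from this post.

# check_options.py - minimal sketch, not part of DeepFaceLab.
# Parses a pasted "== key: value ==" model summary (format shown above) and
# flags the first-run pitfalls mentioned in this thread.

EXPECTED = {
    "resolution": "256",    # 256-res DF-UDT as suggested above
    "archi": "df-udt",
    "random_warp": "True",  # start regular training with random warp on
    "batch_size": "6",
    "pretrain": "False",    # must be off, or the model stays in pretrain mode
    "target_iter": "0",     # reset when reusing a downloaded model
}

def parse_summary(text):
    """Turn lines like '== resolution: 256 ==' into a {key: value} dict."""
    options = {}
    for line in text.splitlines():
        line = line.strip().strip("=").strip()
        if ":" in line:
            key, _, value = line.partition(":")
            options[key.strip()] = value.strip()
    return options

def check(options):
    for key, suggested in EXPECTED.items():
        actual = options.get(key, "<missing>")
        flag = "ok   " if actual == suggested else "CHECK"
        print(f"{flag} {key}: {actual} (suggested: {suggested})")

if __name__ == "__main__":
    # Example: a downloaded model that was accidentally left in pretraining mode.
    sample = """
    == resolution: 256 ==
    == archi: df-udt ==
    == random_warp: True ==
    == batch_size: 6 ==
    == pretrain: True ==
    == target_iter: 0 ==
    """
    check(parse_summary(sample))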

#8027
turnip26
Participant

The clue is in the name: resolution. If you use 256 or 320 but your footage is 4K, the merged face will be blurry, because the model's output has to be upscaled to match the resolution of the source frame. For best results you need sharp video where the size of the face in the video is a close match to the resolution of the model, so unless you have modern hardware that supports high-resolution models, stick to low-res source footage. I would say anything at 256 or less should only be used with 720p.
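
To put rough numbers on that point (the 40% face-height share is just an assumed example, not a rule):

# Rough illustration of why a low-res model looks soft on high-res footage.
# The face is assumed to span about 40% of the frame height (a medium close-up).

def upscale_factor(model_res, frame_height, face_fraction=0.4):
    """How much the merged face must be enlarged to fit back into the frame."""
    face_height_px = frame_height * face_fraction
    return face_height_px / model_res

for label, frame_h in [("720p", 720), ("1080p", 1080), ("4K", 2160)]:
    print(f"{label}: 256-res face upscaled ~{upscale_factor(256, frame_h):.1f}x")

# 720p  -> ~1.1x  (close to native, stays sharp)
# 1080p -> ~1.7x
# 4K    -> ~3.4x  (visibly soft, as described above)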

The other tip is to start with a pretrained model using settings that match your footage, and on a new project with the same settings, start from that same model. Even if the faces in the new project are completely different, you'll be amazed at how quickly the model adapts to the new faces and gives a decent result.

#8082
defalafa
Participant

Resolution, yes, but it depends on the face distance, its size in the clip, and the resolution of the src faceset.

You can easily use a HQ 512 faceset on a 4K dst with a 256-res face model as long as no close-ups are shown; just train long enough.
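
Same point as a one-line rule of thumb; the face heights are assumed to be measured by hand from a dst frame, and the 1.5x slack is my own assumption, not a DFL setting:

# Rule of thumb: the model resolution should roughly cover the face's pixel
# height in the dst clip. The 1.5x slack factor is an assumption, not a DFL rule.

def res_is_enough(model_res, face_height_px, slack=1.5):
    return face_height_px <= model_res * slack

# 4K clip, but the face never gets taller than ~350 px (no close-ups):
print(res_is_enough(256, 350))   # True  -> a 256-res model holds up
# Close-up where the face fills ~900 px of the frame:
print(res_is_enough(256, 900))   # False -> expect a soft result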
