Hello there!
I was experimenting with DeepFaceLab SAEHD. First I created a pretrained model with around 150,000 samples.
And then I wanted to use this model to train on a source scene and a destination scene.
In pretraining mode everything worked fine, but in normal (non-pretraining) mode the model no longer seems to learn the faces by reconstructing them. The preview area, where the blurry reconstructions normally appear, only shows solid yellow, red, or white frames, as if the entire frame were a single flat color. The first few preview images looked like the model was starting to pick up the faces, but then they turned red and yellow, and after a few more samples the whole preview was just yellow.
Is this a common bug? What can I do about it?
Link to see: https://youtu.be/OiVa4ezAeq4