I am trying to set up a workstation for deepfakes.
I have an old HP ML350 G9 server with dual Xeons (48 cores), 200 GB of RAM, and an old GTX 970, running Ubuntu 20.04.
On the other hand, I have a PC with an old i7-4770 but an RTX 3090, running Win10.
I cannot install the 3090 into the server because of the power supply. (I also have a 780 Ti and a 980 Ti, but probably both draw too much for the server's power supply.)
My question is: would it be better to run on the 48 CPU cores with lots of RAM (not sure if the GPU here would make a difference), or
to run on the Win10 PC with the 3090 but the outdated i7-4770? (I could go Linux on that machine too if needed.)
I am not looking to max out the parameters; the usual 320 resolution would be OK, but I am not sure deepfake training would benefit from having lots of RAM and lots of cores.
My preference would be to go with the server, because I can let it run for days if needed.
First I'd test the Xeon system to make sure the CPU is able to run DFL at all. From my understanding, it needs the AVX instruction set. My initial thought is that despite being a dual-Xeon system, it's going to be much slower than your GPU, which I assume has 24 GB of VRAM. I believe you will find it annoyingly slow to train a 320-res model on a CPU. I doubt the old i7 is going to hold you back more than a low/no-GPU scenario would.
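Since that first test hinges on AVX support, here's a quick sketch for checking the flag before installing anything (assumes a Linux box, where the CPU flags are exposed in /proc/cpuinfo; on the Win10 machine you'd use a tool like CPU-Z instead):

```python
# Check whether the CPU advertises the AVX instruction set (Linux only).
# DFL's prebuilt TensorFlow binaries generally require AVX.
def has_avx(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            # The "flags" line lists every instruction-set extension the CPU supports.
            if line.startswith("flags"):
                return "avx" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("AVX supported:", has_avx())
```

Both your E5-v3/v4 Xeons and the i7-4770 (Haswell) should report AVX, but it's a 10-second check that rules out the worst surprise.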
Without knowing your current experience level, I think the best idea is to start with the i7 system, make a few deepfakes, and see if this is something you want to pursue. At that point it might be a good idea to put the 3090 and a new PSU into the Xeon server. I'm imagining you have a large number of PCIe lanes on that system, which could support multiple cards. If you can't spare the 3090 or get another GPU, I'd look into selling the workstation and upgrading the PC. Basically, you're going to need to make a sacrifice or an upgrade if you want to keep making professional-level deepfakes AND have your desktop available for gaming, editing, or whatever.
BTW, having a GPU in the Xeon system won't matter if you're just training on the CPU; the GPU will sit unused.