Earlier this week I was sampling some anime filters in a few AI imaging tools, except I wanted to put 3D porn models through them. I was just curious to see how it would turn out.
So I took this one, made with anime-like features.
And the AI made this. (It generated 6 different variants; I'm just showing 1.)
I was astonished, because this is more than a simple cel shade. Arguably it looks better than my own cel shading. It's that added gloss on the lips, and maybe also how uniform the blush and the other subtle highlights on the skin are. Whoever can make something like this by hand is a machine.
So I thought to myself: I can make a fine 3D animation, then batch render it frame by frame through the AI and make something that looks dope, like shit that looks more insane than Ghost in the Shell II. We don't see this type of finish in 2D animation, maybe not even in 3D.
The AI has a problem when it comes to making video, though, or batching out animated frames, however you want to look at it. It will need to get better at that before it can be used this way, but we'll surely be seeing it very soon.
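To make the batching idea concrete, here's a rough sketch of the naive frame-by-frame approach, using OpenCV for the video handling; stylize() is a hypothetical stand-in for whatever AI filter does the actual work. The catch is that each frame gets filtered with no memory of the one before it, which is exactly where the flicker in AI video comes from.

    import cv2  # pip install opencv-python

    def stylize(frame):
        # Hypothetical placeholder for the AI filter; identity for now.
        return frame

    cap = cv2.VideoCapture("render.mp4")  # the 3D animation, rendered normally
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter("stylized.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Each frame is processed independently, so nothing keeps the gloss
        # or highlights consistent from one frame to the next.
        out.write(stylize(frame))

    cap.release()
    out.release()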
The best solution would be a piece of software called EbSynth.
Some basic Photoshop skills are required, then you're good to go.
While this would yield interesting results for my side project, I wouldn't get the gloss to move right as my characters move.
I do believe, though, that EbSynth is really good for deepfaking.
Here's someone using images generated with Stable Diffusion and then fed into EbSynth. I don't think EbSynth cares where the keyframe comes from, hence the above examples where the user simply painted a single frame and the software applied it to the following frames. The frames are then made into a video again, so it's a semi-manual procedure. Easy stuff.
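EbSynth itself is GUI-driven, so the part worth automating is what happens before and after: exploding a clip into numbered frames for it to chew on, then stitching its output back into a video. A sketch of those two steps, again with OpenCV; all the file and folder names here are just my placeholders.

    import cv2
    import os

    # Step 1: explode the clip into numbered frames for EbSynth.
    os.makedirs("frames", exist_ok=True)
    cap = cv2.VideoCapture("clip.mp4")
    fps = cap.get(cv2.CAP_PROP_FPS)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(f"frames/{count:04d}.png", frame)
        count += 1
    cap.release()

    # Step 2 happens outside this script: paint (or generate) one keyframe,
    # point EbSynth's GUI at the frames folder, let it propagate the style.

    # Step 3: stitch EbSynth's output frames back into a video.
    first = cv2.imread("ebsynth_out/0000.png")
    h, w = first.shape[:2]
    out = cv2.VideoWriter("styled.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for i in range(count):
        out.write(cv2.imread(f"ebsynth_out/{i:04d}.png"))
    out.release()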
EbSynth is free and we can run it on our PC. No servers.
I'm going to look into Stable Diffusion. I find most of these AIs come with paywalls, and I don't blame them. They're running what amounts to mining rigs to process these things, and it has to be run like a business or else it'll end up being an expensive hobby.
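As a first local experiment, I imagine it would look roughly like this. This is only a sketch, assuming Hugging Face's diffusers library and the v1.5 checkpoint; the prompt and the strength value are my guesses at chasing that glossy anime finish on a 3D render.

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    # Download the pretrained weights and move them onto the GPU.
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Start from a frame of the 3D render instead of pure noise.
    init = Image.open("render_frame.png").convert("RGB").resize((512, 512))

    result = pipe(
        prompt="anime style portrait, glossy lips, soft cel shading",
        image=init,
        strength=0.5,        # how far the result may drift from the render
        guidance_scale=7.5,  # how strongly to follow the prompt
    ).images[0]
    result.save("anime_frame.png")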
From the Stable Diffusion release announcement:

In the coming period we will release optimized versions of this model along with other variants and architectures with improved performance and quality. We will also release optimisations to allow this to work on AMD, Macbook M1/M2 and other chipsets. Currently NVIDIA chips are recommended.
We will also release additional tools to help maximize the impact and reduce potential adverse outcomes from these tools with amazing partners to be announced in the coming weeks.
This technology has tremendous potential to transform the way we communicate and we look forward to building a happier, more communicative and creative future with you all.
Please contact [email protected] with any queries, much more soon.
I have 3 Nvidia GPUs, though the Titan needs new thermal pads.
If I get around to experimenting with it before I can replace the pads, then I'll run this on a GTX 750 Ti, though it only has 640 CUDA cores. I still wouldn't mind if it took a day to complete my first experiment. For now I don't understand how it works. It has models and such we can download, but I don't presume those are 3D models; they could be. No idea.
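From what I can tell, the downloadable "models" are checkpoint files of learned network weights, a few gigabytes of numbers rather than 3D assets. Before pointing anything at the older cards, a quick check like this with plain PyTorch would show what I'm working with; my understanding is that VRAM, not CUDA core count, is the first wall a card like the 750 Ti hits.

    import torch

    # Sanity check the GPU before trying to load a multi-gigabyte model onto it.
    if not torch.cuda.is_available():
        raise SystemExit("No CUDA device visible to PyTorch.")

    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GB")
    print(f"Compute capability: {props.major}.{props.minor}")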