TheLastBen (@__theben)'s Twitter Profile
TheLastBen

@__theben

AI Consultant
fast-stable-diffusion

ID: 53741184

Link: https://github.com/TheLastBen/fast-stable-diffusion
Joined: 04-07-2009 19:26:43

467 Tweets

1.1K Followers

189 Following

TheLastBen (@__theben):

AI in 2024 is Bitcoin in 2013: whatever you're doing, keep doing it and you'll have a chance to stand with the giants in the future. The ups and downs are just noise; Bitcoin crashing from $1 to 75 cents was just noise, and it's the same with AI, so don't listen to the noise.

Emm (@emmanuel_2m):

It works on character models too. Workflow:
1. #Train a model (15 min + 1 hr processing time)
2. #Sketch (5 min in the Live Canvas)
3. #Enhance to polish the render (1 min)
You can get ultra-consistent characters, exactly in the pose you want, in minutes, for less than a dollar.

TheLastBen (@__theben):

Davy Jones, no LoRA vs. a 350-step LoRA, on Flux. An extremely low-quality 7-picture dataset, resized (not upscaled) up to 1024 for more stable training.

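The preprocessing isn't spelled out beyond "resized (not upscaled) up to 1024", so here is a minimal sketch of that idea, assuming plain Lanczos interpolation that grows each image's longest side to 1024 px; the folder names and file extension are placeholders.

```python
# Minimal sketch of the preprocessing described above: plain interpolation
# resizing (no AI upscaler) so every image's longest side becomes 1024 px.
# Folder names and the .jpg extension are assumptions.
from pathlib import Path
from PIL import Image

SRC = Path("dataset_raw")   # hypothetical input folder (the 7 low-quality pics)
DST = Path("dataset_1024")  # hypothetical output folder used for training
DST.mkdir(exist_ok=True)

for img_path in sorted(SRC.glob("*.jpg")):
    img = Image.open(img_path).convert("RGB")
    scale = 1024 / max(img.size)  # grow the longest side to 1024
    new_size = (round(img.width * scale), round(img.height * scale))
    img.resize(new_size, Image.Resampling.LANCZOS).save(DST / img_path.name, quality=95)
```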
TheLastBen (@__theben):

Existing tokens greatly reduce training time: "the joker" and "the hound" were the training tokens for these Flux LoRAs, trained for 500-700 steps in about 10-20 minutes on an A100-80G.

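A hedged sketch of the idea: reuse a token the base model already knows as the training prompt, so the LoRA only has to nudge an existing concept rather than learn a rare placeholder from scratch. Only the prompts and step counts come from the tweet; the config keys, rank, learning rate, and the training wrapper are illustrative assumptions, not the author's actual settings.

```python
# Sketch of "existing tokens as training tokens": the instance prompt is a
# concept the base model already knows, so fewer steps are needed than with
# a rare placeholder token. Rank, learning rate, and the trainer call are
# hypothetical; only the prompts and step counts come from the tweet.
lora_runs = [
    {"instance_prompt": "the joker", "max_train_steps": 700, "rank": 16, "lr": 1e-4},
    {"instance_prompt": "the hound", "max_train_steps": 500, "rank": 16, "lr": 1e-4},
]

for run in lora_runs:
    # A real trainer (e.g. a diffusers DreamBooth-LoRA script) would go here;
    # the tweet reports roughly 10-20 minutes per run on an A100-80G.
    print(f"Would train a Flux LoRA on {run['instance_prompt']!r} "
          f"for {run['max_train_steps']} steps")
```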
TheLastBen (@__theben):

For powerful models like Flux, LoRA training shouldn't be treated as a one-size-fits-all process: taking advantage of the model's power leaves a lot of room for optimization and fine-grained LoRA training, including targeting specific blocks at varied strengths, which saves time and memory and improves quality.

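One possible way to express "targeting specific blocks at varied strengths" is through per-module rank and alpha overrides in PEFT's LoraConfig. The block and module names below follow diffusers' Flux transformer layout as an assumption, and the specific indices and values are illustrative only; check your model's named_modules() before training.

```python
# Sketch: block-targeted LoRA with varied strengths via PEFT's
# rank_pattern / alpha_pattern. All block indices and rank/alpha values
# below are assumptions for illustration, not the author's recipe.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                                 # default rank
    lora_alpha=16,                                        # default scaling
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention projections
    # Give one double-stream block extra capacity...
    rank_pattern={
        "transformer_blocks.7.attn.to_q": 32,
        "transformer_blocks.7.attn.to_v": 32,
    },
    # ...and dial down the effective strength of a late single-stream block.
    alpha_pattern={
        "single_transformer_blocks.30.attn.to_q": 8,
    },
)
# In a diffusers-based trainer this config would then be attached to the
# Flux transformer, e.g. transformer.add_adapter(lora_config).
```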