---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-I2V-14B-480P
- Wan-AI/Wan2.1-I2V-14B-480P-Diffusers
pipeline_tag: image-to-video
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- image-to-video
widget:
- text: >-
    A man with dark hair wearing an orange jacket is shown in front of a brown
    background. He starts by looking into the camera and smiling and then he
    starts l4a6ing laughing. He's still l4a6ing laughing at the end.
  output:
    url: example_videos/1.mp4
- text: >-
    A man with dark hair is looking into the camera smiling and then he starts
    l4a6ing laughing. He's still l4a6ing laughing at the end.
  output:
    url: example_videos/2.mp4
- text: >-
    A man with gray hair and a gray t-shirt is smiling at the camera. He then
    begins l4a6ing laughing.
  output:
    url: example_videos/3.mp4
---
This LoRA is trained on the Wan2.1 14B I2V 480p model and allows you to make yourself or your friends laugh!
The key trigger phrase is: `l4a6ing laughing`
For prompting, use the example prompts above as a guide; that style of prompt seems to work very well.
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
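If you would rather use diffusers than ComfyUI, a usage sketch might look like the following. This is untested; the LoRA filename (`laughing_lora.safetensors`), resolution, frame count, and LoRA strength are assumptions, while the base model ID comes from this card's metadata.

```python
# Hypothetical diffusers sketch for this LoRA; filenames and settings are assumptions.
TRIGGER = "l4a6ing laughing"

def build_prompt(description: str) -> str:
    """Append the trigger phrase unless the prompt already contains it."""
    if TRIGGER in description:
        return description
    return f"{description} He then begins {TRIGGER}."

if __name__ == "__main__":
    # Heavy imports and 14B model download kept behind the main guard.
    import torch
    from diffusers import WanImageToVideoPipeline
    from diffusers.utils import export_to_video, load_image

    pipe = WanImageToVideoPipeline.from_pretrained(
        "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
    )
    # Assumed local LoRA filename; replace with the file from the Downloads section.
    pipe.load_lora_weights(".", weight_name="laughing_lora.safetensors")
    pipe.to("cuda")

    image = load_image("portrait.png")  # your input frame
    video = pipe(
        image=image,
        prompt=build_prompt("A man with dark hair is smiling at the camera."),
        height=480,  # 480p settings assumed for this base model
        width=832,
        num_frames=81,
    ).frames[0]
    export_to_video(video, "laugh.mp4", fps=16)
```

The helper simply makes sure the trigger phrase from this card is present in every prompt, mirroring how the example prompts end with the subject laughing.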
Training was done using diffusion-pipe.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!