
Wan VACE Video Nudify Workflow Tutorial

singingminer

Bathwater Drinker
Jan 5, 2023
77
10,810
Wan VACE Video Nudify Workflow Tutorial

Overview

This method uses the Wan VACE model to create a nude version of a given video.

V2 UPDATE: I created and uploaded a new workflow. This new method uses the GroundingDinoSAMSegment node and the MatAnyone node to automatically recognize clothing and generate a video mask for it. The whole process now takes place in a single workflow file. I also split the interpolate and upscale step into its own workflow, since many people ended up removing it anyway.

Example output below:

Download the two JSON workflow files here: https://gofile.io/d/bqP99R
  • Main workflow: lsmtatk_gguf_wan_inpaint_v2.json
  • Upscale and interpolate workflow: lsmtatk_upscale_and_interpolate.json

Workflows Overview:
  • lsmtatk_gguf_wan_inpaint_v2.json - Full pipeline: creates the reference image, masks clothing, runs Wan VACE
  • lsmtatk_upscale_and_interpolate.json - Simple upscale and interpolate workflow

Main Workflow Organization:
  • User Input Section (Blue) - Where you'll make most of your adjustments
  • Logic Section (Red) - Handles processing automatically (rarely needs modification for newer users)
Image below shows the blue and red regions described above:

[Screenshot: workflow canvas showing the blue User Input section and red Logic section]
You will see these Fast Groups Bypasser nodes throughout the workflow; they activate and deactivate certain node groups. In the new V2 workflow, we won't use them as often as in V1, but it's good to know they're there.

[Screenshot: Fast Groups Bypasser nodes]

Model Downloads:
Wan VACE GGUF: https://huggingface.co/QuantStack/Wan2.1_14B_VACE-GGUF/tree/main
Wan Self-Forcing Lora: https://civitai.com/models/1585622/causvid-accvid-lora-massive-speed-up-for-wan21-made-by-kijai
Wan Female Genitalia Lora: https://civitai.com/models/1434650/nsfwfemale-genitals-helper-for-wan-t2vi2v?modelVersionId=1621698
Wan General NSFW Model: https://civitai.com/models/1307155/wan-general-nsfw-model-fixed?modelVersionId=1475095
Wan Clip GGUF: https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main
Wan VAE: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/vae
Upscaler Model: https://openmodeldb.info/models/4x-LSDIR

SDXL_Pony_Realism: https://civitai.com/models/372465/pony-realism?modelVersionId=1920896
SDXL_Controlnet: https://huggingface.co/xinsir/controlnet-union-sdxl-1.0/blob/main/diffusion_pytorch_model.safetensors

MatAnyone_Kytra: https://huggingface.co/Mothersuperior/ComfyUI_MatAnyone_Kytra/blob/main/matanyone.pth

Important Note about GGUF Models:
GGUF models are listed by quantization level (e.g. Q4, Q5, Q8). The lower the number after the Q, the less VRAM and RAM the model needs to run, but the quality of the output also goes down. Based on your GPU and RAM, you'll need to find which quant works best with your setup.
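As a rough rule of thumb, file size (and memory footprint) scales with the bits per weight of the quant. The bits-per-weight figures below are approximate averages I'm assuming for common quant types, not exact numbers from the QuantStack repo:

```python
# Rough GGUF file-size estimator for a 14B-parameter model like Wan VACE.
# Bits-per-weight values are approximations (assumption: real GGUF files
# vary slightly because some layers keep higher precision).

APPROX_BITS_PER_WEIGHT = {
    "Q4_0": 4.5,
    "Q5_0": 5.5,
    "Q8_0": 8.5,
}

def approx_size_gb(params_billions, quant):
    """Approximate on-disk size in GB: params * bits-per-weight / 8."""
    bits = params_billions * 1e9 * APPROX_BITS_PER_WEIGHT[quant]
    return bits / 8 / 1e9  # bits -> bytes -> GB

for q in APPROX_BITS_PER_WEIGHT:
    print(f"14B @ {q}: ~{approx_size_gb(14, q):.1f} GB")
```

If the estimated size is well above your VRAM, expect heavy offloading to system RAM and much slower generation.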

Exact File Locations:
ComfyUI/models/checkpoints/sdxl_pony_realism.safetensors
ComfyUI/models/loras/wan_general_nsfw.safetensors
ComfyUI/models/loras/wan_self_forcing_lora.safetensors
ComfyUI/models/loras/wan_female_genitalia_lora.safetensors
ComfyUI/models/vae/wan_2.1_vae.safetensors
ComfyUI/models/unet/YOUR_WAN_VACE_GGUF_MODEL.gguf
ComfyUI/models/clip/YOUR_WAN_CLIP_GGUF_MODEL.gguf
ComfyUI/models/upscale_models/4x-LSDIR.pth
ComfyUI/models/controlnet/diffusion_pytorch_model.safetensors

NOTE: If you don't see ComfyUI_MatAnyone_Kytra in custom_nodes, you'll need to install the required custom nodes from within the ComfyUI interface first, specifically the ComfyUI_MatAnyone_Kytra node. The matanyone.pth model goes in:
ComfyUI/custom_nodes/ComfyUI_MatAnyone_Kytra/model
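To sanity-check your install, a small script like this (a hypothetical helper, not part of the workflow) can report which of the expected files are missing under your ComfyUI root. The unet/clip GGUF names are left out since they depend on which quant you downloaded, and I'm assuming matanyone.pth lives in the custom node's model folder as noted above:

```python
import os

# Expected model locations from the tutorial (GGUF names vary by quant,
# so the unet/clip .gguf files are not listed here).
EXPECTED_FILES = [
    "models/checkpoints/sdxl_pony_realism.safetensors",
    "models/loras/wan_general_nsfw.safetensors",
    "models/loras/wan_self_forcing_lora.safetensors",
    "models/loras/wan_female_genitalia_lora.safetensors",
    "models/vae/wan_2.1_vae.safetensors",
    "models/upscale_models/4x-LSDIR.pth",
    "models/controlnet/diffusion_pytorch_model.safetensors",
    "custom_nodes/ComfyUI_MatAnyone_Kytra/model/matanyone.pth",
]

def missing_files(comfyui_root):
    """Return the expected model paths that don't exist under comfyui_root."""
    return [p for p in EXPECTED_FILES
            if not os.path.isfile(os.path.join(comfyui_root, p))]

for path in missing_files("ComfyUI"):
    print("missing:", path)
```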



Initial Setup


Configuration:
  1. Configure your settings as desired
  2. Load your WAN VACE, CLIP, and VAE models into their respective slots
  3. Set resolution to 480x832 or 832x480 for optimal processing
  4. Configure video_fps and length.
    1. Together these determine how much of your video is covered: length (in frames) divided by video_fps gives the duration in seconds. For example, a frame rate of 12 with a length of 48 covers 4 seconds total.
  5. Configure Skip_First_frame if you want to skip the initial part of the input video. This value is measured in frames at video_fps, just like the length setting.
  6. Configure the Mask Objects setting with the objects you want masked out of the video. It's currently set up to remove clothing.
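The frame-rate arithmetic in steps 4 and 5 can be sketched as:

```python
def covered_duration_s(video_fps, length_frames):
    """Seconds of the source video covered by one run: frames / fps."""
    return length_frames / video_fps

def skip_duration_s(video_fps, skip_first_frames):
    """Seconds skipped at the start (Skip_First_frame is also in frames)."""
    return skip_first_frames / video_fps

# Step 4's example: 48 frames at 12 fps covers 4 seconds.
print(covered_duration_s(12, 48))  # → 4.0
```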



Running the Workflow

1. Upload your video in the Input Video node group.
2. Run the workflow.
3. That should be all. The output contains both the generated video and a side-by-side comparison of the generated and input videos.
4. I also included the upscale_and_interpolate workflow if you would like to use it. It's a very simple workflow.
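For reference, upscaling and interpolation just multiply resolution and fps. A minimal sketch (the 2x interpolation factor and 16 fps example are my assumptions; 4x-LSDIR is a 4x upscaler):

```python
def interpolated_fps(source_fps, multiplier):
    """Frame interpolation inserts frames, multiplying the effective fps."""
    return source_fps * multiplier

def upscaled_size(width, height, scale=4):
    """A 4x model like 4x-LSDIR multiplies each dimension by `scale`."""
    return (width * scale, height * scale)

# e.g. a 480x832 output at 16 fps, interpolated 2x and upscaled 4x:
print(interpolated_fps(16, 2))  # → 32
print(upscaled_size(480, 832))  # → (1920, 3328)
```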



Edits to Tutorial:

1. Changed https://civitai.com/models/245423/juggerxlinpaint to https://civitai.com/models/372465/pony-realism?modelVersionId=1920896 in the Models section for the SDXL inpainting step. The juggerxlinpaint model produced splotchy results, which the pony-realism model handles much better. ControlNet strength needs to be raised from 0.15 to 0.30 with this change.

2. Created lsmtatk_gguf_wan_inpaint_v1.1.json, which improves interpolation performance. The process still uses a ton of resources, so feel free to remove the interpolation if you don't have a strong computer.

3. Added more images to better clarify the steps, and added a clearer description to the tutorial overview.

4. Created and added lsmtatk_gguf_wan_inpaint_v2.json. Uploaded the old workflows and old guide to gofile.
 
Post in Neekolul Fakes Thread

This stuff is nuts. This clip specifically is by no means made solely with this workflow, but the core VACE inpaint stuff is pretty exceptional. If they can get video length up significantly, i.e. a FramePack-style workflow with good results, it would become utterly insane.

32GB RAM, 4070 Ti (12GB VRAM)

More on the process:
- Utilized this workflow for the VACE inpaint, deleting all the upscale and frame interp nodes in favor of doing it manually via Topaz.
- Cut the source video as cleanly as I could into ~2s chunks (within 50 frames, as gen time skyrockets on my system beyond that)
- Upscale/interp through Topaz to match source-quality (1080p30) then stitch output into editing software separately
- Alpha-layered the original face, and as much of the background as possible, back on to fight less with the AI/upscaling artifacts (it's data we preserve!)
- Upscale/interp to 4k60fps
- Sample video was manually created, not through the workflow!
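The chunking step described above (keeping each piece within the ~50-frame budget) can be sketched as:

```python
def chunk_ranges(total_frames, max_frames=50):
    """Split a clip into consecutive (start, end) frame ranges, end-exclusive,
    each at most max_frames long (gen time reportedly skyrockets past ~50)."""
    ranges = []
    start = 0
    while start < total_frames:
        end = min(start + max_frames, total_frames)
        ranges.append((start, end))
        start = end
    return ranges

# A 10-second 30 fps source (300 frames) splits into six 50-frame chunks:
chunks = chunk_ranges(300)
print(len(chunks), chunks[0], chunks[-1])  # → 6 (0, 50) (250, 300)
```

Note that at 30 fps, 50 frames is closer to 1.7 seconds than 2, so cutting at scene-friendly points matters more than hitting an exact duration.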