Hey everyone!

I’m really new to ComfyUI and I’m trying to recreate a workflow originally developed by the folks at Maison Meta (image attached). The pipeline goes from a 2D sketch to photorealistic product shots, then to upscaled renders, and finally generates photos of models wearing the item in realistic scenes.

It’s an interesting concept, and I’d love to hear how you would approach building this pipeline in ComfyUI (I’m working on a 16GB GPU, so optimization tips are welcome too).
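To make the memory question concrete: my understanding is that low-VRAM upscalers (e.g. Ultimate SD Upscale in ComfyUI) process the image in tiles so only one small patch is in VRAM at a time. Here's my own simplified numpy sketch of the idea — the "model pass" is just nearest-neighbor resize here, and real tiled upscalers also overlap/feather tiles to hide seams:

```python
import numpy as np

def upscale_tile(tile, factor):
    # stand-in for the expensive model pass (nearest-neighbor here)
    return tile.repeat(factor, axis=0).repeat(factor, axis=1)

def tiled_upscale(img, factor=2, tile=16):
    """Upscale an image tile by tile so only one small tile is
    processed at a time, then stitch the results back together."""
    h, w = img.shape[:2]
    out = np.zeros((h * factor, w * factor) + img.shape[2:], dtype=img.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            t = img[y:y + tile, x:x + tile]
            out[y * factor:(y + t.shape[0]) * factor,
                x * factor:(x + t.shape[1]) * factor] = upscale_tile(t, factor)
    return out

# toy example: 32x32 grayscale image -> 64x64, tile by tile
img = np.arange(32 * 32, dtype=np.uint8).reshape(32, 32)
big = tiled_upscale(img, factor=2, tile=16)
```

Peak memory scales with the tile size instead of the full image, which is (as far as I can tell) why these nodes work on 16GB cards.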

Some specific questions I have:

- For the sketch-to-product render, would you use ControlNet (Canny? Scribble?) + SDXL, or something else?
- What’s the best way to make sure the details and materials (like leather texture and embroidery) come through clearly?
- How would you handle the final editorial image? IPAdapter? Inpainting? OpenPose for the model’s pose?
- Any thoughts on upscaling choices or memory-efficient workflows?
- Which models would you use at each stage?
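For context on the first question: as I understand it, the Canny preprocessor just reduces the sketch to a black-and-white edge map that ControlNet then conditions on. Here's a simplified numpy-only approximation (Sobel gradient magnitude + threshold — real Canny adds smoothing, non-maximum suppression, and hysteresis, but the output looks similar):

```python
import numpy as np

def sobel_edge_map(gray, threshold=80):
    """Approximate a Canny-style edge map: Sobel gradient magnitude,
    then a hard threshold. ControlNet-Canny ends up seeing a black
    image with white edge lines, like this produces."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    g = gray.astype(np.float32)
    p = np.pad(g, 1, mode="edge")  # pad so output matches input size
    h, w = g.shape
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    for i in range(3):          # 3x3 convolution, written out with shifts
        for j in range(3):
            patch = p[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    return np.where(mag > threshold, 255, 0).astype(np.uint8)

# toy example: a white square on black -> edges appear at its border
img = np.zeros((32, 32), dtype=np.uint8)
img[8:24, 8:24] = 255
edges = sobel_edge_map(img)
```

If that's roughly right, I'd guess the threshold (and whether you use Canny vs. Scribble) decides how much of the fine embroidery detail survives into the conditioning image.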

Thanks


💬 Discussion r/StableDiffusion (2 points, 1 comment) 🔗 Source