Hey!
Seedance 2.0, the latest frontier AI video model, just dropped. And its biggest innovation is Omni-Reference mode. Instead of giving the model a single image and hoping for the best, you can now feed it up to 9 images, 3 videos, and 3 audio files as explicit context inputs. The model uses all of that to understand exactly what you want and generate a better video because of it.
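To make that input model concrete, here's a rough sketch of what an Omni-Reference request could look like if you drove it from code. To be clear, this is my own illustration: the class, field names, and request shape are all hypothetical, and only the limits (9 images, 3 videos, 3 audio files) come from Seedance 2.0 itself.

```python
# Hypothetical sketch of an Omni-Reference request. The class and field
# names are made up for illustration; only the input limits (9 images,
# 3 videos, 3 audio files) come from Seedance 2.0 itself.
from dataclasses import dataclass, field

MAX_IMAGES, MAX_VIDEOS, MAX_AUDIO = 9, 3, 3

@dataclass
class OmniReferenceRequest:
    prompt: str
    images: list[str] = field(default_factory=list)  # product shots, environments, actors
    videos: list[str] = field(default_factory=list)  # motion or pacing references
    audio: list[str] = field(default_factory=list)   # voice or music references

    def validate(self) -> None:
        for name, refs, cap in [("images", self.images, MAX_IMAGES),
                                ("videos", self.videos, MAX_VIDEOS),
                                ("audio", self.audio, MAX_AUDIO)]:
            if len(refs) > cap:
                raise ValueError(f"Omni-Reference accepts at most {cap} {name}")

request = OmniReferenceRequest(
    prompt="Casual UGC-style review of the supplement",
    images=["gruns_bottle.png", "gruns_gummies_closeup.png"],
)
request.validate()
```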
See the Difference
I gave Seedance two product images of this supplement brand and let it freestyle. Then I ran the same thing in Sora 2 Pro.
Gruns supplement bottle
Gruns gummies close-up

The scene creativity is wild. It opens with the raw product shot, then cuts to B-roll of a woman at the office, then doing yoga. It genuinely feels like a real UGC video.
The second input image is key. I gave it a photo of what the actual Gruns gummy bears look like, and in the video, the gummies show up as real green bears. That's something a model would almost never figure out on its own.
That's Omni-Reference mode at work. You give it context, and it actually uses it.
Now here's the same product in Sora 2 Pro.
Gruns supplement bottle

The main issue is the aspect ratio, which comes from the reference image. This is a known limitation when using Sora 2's Start Frame. When you give a product image to Sora 2, you're forcing the model to use it exactly as-is, even though that's not what it's designed for.
The video itself also feels too polished. More like a commercial than authentic UGC.
Seedance 2.0 vs Sora 2: The Breakdown
Here's a quick comparison of what each model gives you for UGC. The short version: Seedance gives you way more control and longer generations.
| Feature | Seedance 2.0 | Sora 2 Pro |
|---|---|---|
| Image inputs | Up to 9 images | 1 (start frame only) |
| Video inputs | Up to 3 clips | None |
| Audio inputs | Up to 3 files | None |
| Max generation | 15 seconds | 12 seconds (lose 1-2s to start frame) |
| Edit & extend | Yes | No |
| Lipsync & voice | Built-in, best in class | Not available |
| Sound effects & music | Built-in | Not available |
Sora 2 gives you one input image as a first frame. That's it. Seedance 2.0's Omni-Reference mode lets you feed in images, videos, and audio all at once as explicit context. The model uses these inputs to understand your product, your environment, and your creative direction. You're directing the scene instead of hoping the model guesses right.
Freestyle UGC
In freestyle mode, you give Seedance minimal direction and let it improvise. Just product images in, full UGC video out. Here's how that stacks up against Sora 2.
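In terms of the hypothetical request sketch from the intro, freestyle is the minimal case: product images in, a one-line prompt, everything else left to the model.

```python
# Freestyle, reusing the hypothetical OmniReferenceRequest from above:
# just the product shot and a bare prompt, no scene direction at all.
freestyle = OmniReferenceRequest(
    prompt="UGC video for this product",
    images=["gym_supplement.png"],
)
freestyle.validate()
```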
Gym Supplement
Single product image. No detailed prompt. Seedance improvised the entire gym workout scene.
Gym supplement product

Seedance doesn't just slap the product into a random context. It understands what the product is and builds a story around it. The gym setting, the workout, and the way the actor interacts with the product all make sense.
Gym supplement product

Same aspect ratio problem. And the video leans commercial instead of authentic UGC. This is a consistent pattern with Sora 2 when you feed it product images.
Controlled UGC
This is where things get really interesting. Instead of letting the model improvise, you give it specific directions on the location, action, and how you want the video to play out.
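A template helps here, so every controlled generation covers the same beats. The structure below is just my own convention, not an official Seedance format; adjust the beats to taste.

```python
# A simple prompt template for controlled UGC. The beat structure is my
# own convention, not a Seedance-prescribed format.
CONTROLLED_UGC_TEMPLATE = """\
Location: {location}
Actor: {actor}
Action: {action}
Dialogue: {dialogue}
Style: handheld selfie framing, natural light, casual UGC energy
"""

prompt = CONTROLLED_UGC_TEMPLATE.format(
    location="a bright kitchen counter in the morning",
    actor="a woman in her late 20s, athleisure, relaxed",
    action="picks up the supplement jar and shakes two gummies into her palm",
    dialogue="Okay, these are the only gummies I actually remember to take.",
)
```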
Supplements (PINC)
I gave it specific instructions on the location and the exact action I wanted the actor to perform.
PINC supplement

Amazing prompt adherence. It followed every detail I gave it. High realism, lipsyncing on point, and the voice sounds natural.
PINC supplement

Sora 2's version is still realistic but gives off a studio vibe instead of authentic UGC. The product sizing is also a bit off. You can tell it's struggling to balance the product reference with the scene it's trying to build.
Beauty / Makeup (Lipstick)
I gave the model the actual lipstick product image and wrote a very detailed prompt about it being a peel-off lipstick.
Lipstick product

The fact that it understood "peel-off lipstick" and nailed the application AND the peel is insane. This is where Seedance's prompt comprehension really shines.
Lipstick product

Sora 2 was okay on the application part. But it completely failed on the peel-off. And because the max generation is 12 seconds, with a second or two lost to the start frame, every Sora 2 video ends up feeling rushed and incomplete.
Beauty / Skincare
Skincare product

Skincare is a great test because the product interaction has to look natural. Nobody wants to watch someone awkwardly apply a cream. Seedance nails the hand movements and the way she holds the product.
Skincare product

Sora 2's version falls into the same pattern. Decent realism, but the product interaction feels stiff and the scene ends before it has a chance to develop.
Bonus: Green Screen UGC
Seedance can also generate green screen style content, which is incredibly useful for post-production. You can key out the background and composite your own.
The talent, the lighting, the green screen spill. It all looks like it was shot in an actual studio. This opens up a whole new workflow where you generate the talent and composite them into your own branded backgrounds.
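If you'd rather do the keying yourself than in an editor, a basic chroma key is only a few lines with OpenCV. Here's a minimal single-frame sketch, assuming you've exported frames from the generated clip; the file names are placeholders, and the HSV bounds will need tuning per shot.

```python
# Minimal chroma-key sketch: swap the green screen in one exported frame
# for a branded background. Requires opencv-python and numpy; file names
# are placeholders. For a full clip, run this per frame (or use your NLE).
import cv2
import numpy as np

frame = cv2.imread("seedance_talent_frame.png")   # green-screen frame
background = cv2.imread("brand_background.png")
background = cv2.resize(background, (frame.shape[1], frame.shape[0]))

# Key on green in HSV space; tune the bounds for your footage.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([40, 80, 80]), np.array([80, 255, 255]))
mask = cv2.medianBlur(mask, 5)  # soften ragged key edges

# Where the mask fires (green), take the background; elsewhere keep the talent.
composite = np.where(mask[..., None] > 0, background, frame)
cv2.imwrite("composited_frame.png", composite)
```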
Tips & Tricks
After weeks of testing Seedance 2.0 for UGC, here are the three biggest things that will save you time and credits.
Be Specific with Omni-Reference
Seedance improvises really well on its own. You saw that with the freestyle examples above.
But if you want a specific result, take the time to give it detailed references. The difference is massive. Look at what happens when you give it 6 input images for an unboxing video versus just one.
Skims jacket
Outfit reference 1
Outfit reference 2
Outfit reference 3
Outfit reference 4
Outfit reference 5

The more context you give through Omni-Reference, the less you're leaving up to chance. You're not hoping it generates something good. You're telling it exactly what to create.
Use Blurry Images for Character Variety
You'll notice pretty quickly that Seedance tends to generate characters that look similar. And you can't give it a close-up photo of a real person because that usually gets blocked.
But there's a middle ground. If you give it an image of a person that's blurry or unclear, just enough to suggest the features, it creates a much more unique character consistently.
Blurry person reference
Product reference

If you're making multiple UGC videos and need the actors to look different from each other, this is the move.
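Making the reference blurry yourself is a one-liner with Pillow. The file name is a placeholder; push the radius until features are only just suggested.

```python
# Blur a face reference until it only suggests features. Requires Pillow;
# the file name is a placeholder.
from PIL import Image, ImageFilter

ref = Image.open("person_reference.jpg")
ref.filter(ImageFilter.GaussianBlur(radius=12)).save("person_reference_blurry.jpg")
```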
Use Edit and Extend for Character Consistency
This is genuinely a game-changer and the biggest advantage over every other model right now.
Let's say you generated a UGC video and you're happy with it. Great character, great scene, but you need more footage. With any other model, you're done and have to start over.
With Seedance 2.0, you take that video, feed it back in, and say "extend this and make the actor continue saying..." whatever you need.
Product reference

It keeps the same character, same environment, same voice. Even Veo 3's extension couldn't maintain voice consistency. Seedance nails both.
For UGC this means you can build longer videos from shorter clips, keep your character consistent across an entire campaign, and iterate on what's working instead of starting from scratch every single time.
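In workflow terms, extend is just a loop: generate, review, feed the result back with a continuation instruction. The sketch below is purely illustrative; both function names are hypothetical stand-ins for however you actually drive Seedance 2.0. What matters is the shape of the loop.

```python
# Hypothetical extend loop. `generate_video` and `extend_video` are
# stand-ins for whatever interface drives Seedance 2.0; neither is a
# real API. The point is the shape of the workflow.

def generate_video(prompt: str, images: list[str]) -> str:
    """Placeholder: submit a generation, return a path to the result."""
    raise NotImplementedError("stand-in for the real generation call")

def extend_video(video: str, prompt: str) -> str:
    """Placeholder: feed a finished video back in with an extend instruction."""
    raise NotImplementedError("stand-in for the real extend call")

def build_long_ugc(product_images: list[str], beats: list[str]) -> str:
    """Chain short generations into one longer, character-consistent video."""
    video = generate_video(prompt=beats[0], images=product_images)
    for beat in beats[1:]:
        # Feeding the previous result back in is what carries the character,
        # environment, and voice across segments.
        video = extend_video(video, prompt=f"Extend this. The actor continues: {beat}")
    return video
```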
Next Steps
You can access Seedance 2.0 through Jianying.com right now.
If you want to skip the learning curve, Starpop.AI is an AI content tool built specifically for ecommerce brands: ready-to-use AI video templates for product ads, UGC, try-on hauls, and more, built from the same workflows and prompts used in this article. All the testing I've been doing is being turned into templates on Starpop, so you can generate UGC like the examples above without figuring out the prompts and inputs yourself. Try it at starpop.ai.
A lot of the examples in this post actually came from people in our Discord who sent me their product images. I ran them in Seedance 2.0 for free, and now they can use the videos however they want. Join the Discord (link below) if you want the same.
