Did Midjourney finally solve the character consistency problem? In this video, we dive deep into the brand new Omni Reference feature—Midjourney’s replacement for the old --cref parameter. Whether you’re aiming for photo-realistic characters, stylized art, or even object and pet consistency, this feature might just be a breakthrough.
🔍 What you’ll learn in this video:
- How Omni Reference works with character faces, clothing, and stylization
- Why increasing Omni Weight to 400+ boosts clothing accuracy
- When to keep default values for better object consistency
- How Omni Reference performs with non-human creatures (say hi to Lonnie 🐶)
- Testing it on products, vehicles, and mood boards
- Using it alongside Style Reference and the new experimental --xexp parameter
🎨 From cinematic Victorian detectives to ancient Japanese aesthetics, I explore the limits and possibilities of Omni Reference with hands-on tests and comparisons. Plus, I share pro tips on how to get the most consistent results across different use cases.
👉 Whether you're a digital artist, creative director, or AI enthusiast—this feature could redefine how you work with characters and styles in Midjourney.
📌 Key Takeaways:
- Use close-up images for best face consistency
- Increase Omni Weight to 400+ for clothing
- Keep Omni Weight around 100 for objects
- Mention unusual details (like “10-wheel Mustang”) in your prompt
- Combine with --style, mood boards, and --xexp for illustrative or cinematic effects (example prompt below)
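💡 Rough example of how a prompt like this might look (I'm assuming Omni Reference is invoked with --oref for the reference image and --ow for Omni Weight, so double-check the parameter names against Midjourney's docs; the URL is just a placeholder):
cinematic Victorian detective in a rain-soaked alley --oref https://your-reference-image.png --ow 400 --v 7
Push --ow toward 400+ when clothing needs to carry over; drop it back to around 100 when you only need the object itself to stay consistent.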
👍 If this helped you, don’t forget to like, comment, and subscribe for more Midjourney tips, experiments, and storytelling techniques.
#midjourney #midjourneyv7 #omnisystem #omnireference #aiart #characterconsistency #midjourneytutorial #digitalart #photorealism #creativitytools
CHAPTERS:
0:00 INTRO
0:28 Consistent Characters
1:57 Different Camera Angles
3:18 Consistent Clothing
4:36 Consistent Objects
7:46 Consistent Pet Photos
8:43 Stylization, Personalization, Moodboard, Style Ref
A study in continuity. Wanted to try Veo 3's last-frame functionality again. Surprising how well you can keep scenes consistent if you're careful, and prompt adherence was better than I remembered too. Midjourney for the first frame; the rest was Veo 3.
Suno Song: https://suno.com/song/f7177629....-6587-4182-be9a-af57
Original Recording: https://suno.com/song/30e47832....-e07e-4807-8b8d-10f7
Same process as before: I started with an original recording and used Suno to produce it. You can listen to both above.
I keep up with all the latest tools. Like and Subscribe for more. Thanks for watching!
#ai #film #veo3 #google #midjourney #generativeai #aiart #aivideo #lofi #aiartgenerator #aivideogenerator #googleai #midjourneyart #lofimusic #artificialintelligence #aicreation #digitalart #futuretech #texttovideo #imagetovideo #aiinnovation #contentcreation #aiarttrends #aiexploration #nextgenai
A bunch more experiments with Midjourney and Veo 3. I used Suno 4.5 for the music. Upscaled in Topaz. I continue to be impressed by the quality. You really have to take what it gives you: a lot of the generations are terrible, but sometimes it gives you something pretty great.
Suno Song: https://suno.com/song/6af916b7....-afb4-49cf-adba-dbd1
I keep up with all the latest tools. Like and Subscribe for more. Thanks for watching.
#ai #film #veo3 #google #midjourney #generativeai #aiart #aivideo #techdemo #aiartgenerator #aivideogenerator #googleai #midjourneyart #newaitools #artificialintelligence #aicreation #digitalart #futuretech #texttovideo #imagetovideo #aiinnovation #contentcreation #aiarttrends #aiexploration #nextgenai