Midjourney just rolled out Omni Reference in V7. This new feature lets you point to a reference image and tell Midjourney to put that specific character, object, or creature into your generated image.
For creatives needing practical, usable results with specific elements, this offers a major productivity boost beyond relying purely on text prompt randomness. It makes getting consistent characters or objects into your generations much more reliable.
You use the --oref parameter with an image link and adjust its influence with --ow (omni-weight), using your text prompt to guide the overall scene and any modifications. It works alongside other V7 features.
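A typical prompt combining these parameters might look like the following sketch (the image URL is a placeholder, and the `--ow` value is illustrative; Midjourney documents `--ow` as defaulting to 100, with higher values pushing the result closer to the reference):

```
/imagine prompt: a knight resting by a campfire in a misty forest --v 7 --oref https://example.com/my-character.png --ow 100
```

Raising `--ow` (e.g. toward 400) tends to preserve the reference more literally at the cost of prompt flexibility, while lower values let the text prompt dominate.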
Been playing around with this and find it's still a bit rough around the edges. I like where it's going, but I'd say the failure rate is over 50%. Often the object you place in the scene doesn't actually look like the reference, particularly with people and faces. (I've been experimenting with the omni-weight settings and found that the default often works best.)
ChatGPT's new image model is so dominant it renders everything else useless.
Looks like a really practical tool for Midjourney users. The clean UI and reference management feature seem helpful. Nice work, and congrats on the launch.
Sounds like a powerful tool for precise control over elements in images. With V7, it seems to offer even more accuracy, allowing for highly detailed and customizable creations. A great addition for users looking to take their image generation to the next level!