Adobe Firefly, a generative image tool that does not rely on publicly scraped images, has already gained popularity among Photoshop users. Now, with the introduction of Firefly Image Model 3, the tool’s integration deepens and its capabilities expand significantly.
One of the key enhancements in Image Model 3 is the ability to use reference images, set a content type, and add effects before generation begins. This provides finer control over the generative process and allows users to create more specific and refined images.
Another notable improvement is Firefly’s enhanced understanding of longer prompts and its improved style engine. This results in more natural-looking images that are not limited to aesthetically pleasing defaults. Firefly can now match styles and image structures, and generate countless variations based on user input.
Firefly Image Model 3 will initially be available in Adobe Photoshop desktop beta and the Firefly web app beta. It is expected to be generally available in Photoshop and Firefly later this year.
Firefly’s generative capabilities have proven useful in various scenarios, such as extending tabletops to fill out an image’s aspect ratio and adding objects to images. With the advancements in Image Model 3, users can now fill blank canvases with professional-level imagery, swap out backgrounds, replace objects, and iterate on initial image ideas with ease.
The new reference-image capabilities are particularly powerful, allowing users to drive iterations with specific references. Firefly ensures natural-looking blends and lighting adjustments when incorporating these references.
Overall, the introduction of Firefly Image Model 3 represents a significant step forward for Adobe’s generative AI technology. It empowers users with greater control, versatility, and the ability to create more refined and imaginative images.