Google outdoes itself with a new AI: Imagen can generate specified subjects, and their style can be changed at will
Therefore, the researchers adopt a fine-tuning approach: the model still relies on the [object category] features the AI has already learned, which are then modified by the subject-specific features bound to the identifier [V].
Take generating a white dog as an example: through [V], the model learns personalized details such as the dog's color (white) and body shape, and combines them with the common features it has learned for the broader class [dog] to generate pictures of the white dog that are both plausible and personalized.
To train this fine-tuned text-to-image diffusion model, the researchers first generate low-resolution images from the given text description; at this stage, the dog's appearance in the generated images is still random.
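The pairing of an identifier with a class noun described above can be sketched as prompt construction. This is a minimal illustration, not Google's actual code: "sks" stands in for the rare identifier token [V], and the two prompts correspond to the subject-specific images and the generic class images used during fine-tuning.

```python
# Hedged sketch of DreamBooth-style prompt construction.
# "identifier" plays the role of [V]; "class_noun" is the broad
# category (e.g. "dog") whose learned features the model keeps.
def make_prompts(identifier: str, class_noun: str):
    # Instance prompt: paired with the user's few photos of the
    # specific subject, binding its details to the identifier.
    instance_prompt = f"a photo of {identifier} {class_noun}"
    # Class prompt: paired with generic class images, so the model
    # retains its prior knowledge of the category while fine-tuning.
    class_prompt = f"a photo of {class_noun}"
    return instance_prompt, class_prompt

instance, generic = make_prompts("sks", "dog")
print(instance)  # a photo of sks dog
print(generic)   # a photo of dog
```

The key design point is that the identifier never replaces the class noun; it is prepended to it, so the personalized details are learned on top of the category's common features rather than from scratch.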
DreamBooth's research team is from Google, and the first author is Nataniel Ruiz.
Nataniel Ruiz is a fourth-year PhD student in the Image and Video Computing Group at Boston University and is currently an intern at Google. His main research interests are generative models, image translation, adversarial attacks, facial analysis, and simulation.