These images are stunningly realistic… But they were created by Google’s AI thanks to a simple description


The Google Brain laboratory, which specializes in deep learning, has just presented its latest advance in artificial intelligence: the creation of realistic images from short texts. The results are striking, but the technology is also potentially dangerous.

Summary

  • Results more convincing than those of the competition?
  • An AI that is not intended for the general public
  • Dangerous drifts to avoid

“Unprecedented photorealism combined with a deep level of language understanding”: this is how the Google Brain team summarizes Imagen, its latest creation. Imagen is an artificial intelligence designed to create photorealistic images from short textual descriptions. The principle is simple: engineers write a sentence, for example “a cute corgi lives in a house made of sushi”, and feed it to Imagen, which composes a realistic visual rendering. The result is striking, to say the least.

This corgi image, and all those featured on the Imagen page, came, so to speak, out of the imagination of Google’s artificial intelligence. Imagen thus treads on the turf of other AIs of the same type, such as DALL-E, developed by OpenAI.


Results more convincing than those of the competition?

Google Brain researchers argue that Imagen’s results tend to convince observers more than those of similar AIs. These claims rest on a benchmark built from scratch by the same scientists, called DrawBench: it brings together 200 test sentences that were fed to Imagen and three other models: VQ-GAN, LDM, and DALL-E 2. Each algorithm generated its own renderings, which were then shown to human evaluators tasked with judging how faithfully each image matched its text. Imagen won every time.
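To make the evaluation protocol concrete, here is a minimal sketch of how such human-preference votes could be tallied. This is a hypothetical illustration with made-up data, not Google’s actual DrawBench code; the function name and vote format are assumptions.

```python
from collections import Counter

def tally_preferences(votes):
    """Hypothetical DrawBench-style tally (illustrative only).

    votes: list of (prompt_id, preferred_model) pairs, one per rater
    decision. Returns each model's share of the total votes.
    """
    wins = Counter(model for _, model in votes)
    total = len(votes)
    # Preference rate = fraction of all rater decisions won by each model.
    return {model: count / total for model, count in wins.items()}

# Made-up example: two prompts, two rater decisions each.
votes = [
    (1, "Imagen"), (1, "DALL-E 2"),
    (2, "Imagen"), (2, "Imagen"),
]
rates = tally_preferences(votes)  # e.g. {"Imagen": 0.75, "DALL-E 2": 0.25}
```

In the real study, the claim that “Imagen wins every time” would mean its preference rate exceeded every competitor’s on each comparison.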


Obviously, this study should be taken with a grain of salt, since it was designed by the very researchers whose model it evaluates. However, the company’s laboratory plays the transparency card as much as possible by making the list of DrawBench’s 200 texts public, so that everyone can form their own opinion.

An AI that is not intended for the general public

The examples highlighted by Google Brain are impressive. But here again, it is reasonable to suspect that only the most successful results were selected to present Imagen. It is possible to experiment with the AI in a limited way on the project page, but the choices remain extremely restricted, and the renderings are precomputed rather than generated by the AI in real time.

Unfortunately, Google’s researchers do not intend to offer Imagen to the general public, at least not as it stands. The main reason given is ethics: with an artificial intelligence presumably able to produce a photorealistic rendering of just about anything, Google’s scientists fear uses that could have “a complex impact on society”.


Dangerous drifts to avoid

“Potential risks of misuse raise concerns about open-sourcing the code and demos,” reads the project website. “At this time, we have decided not to release any code or demo to the public. In the future, we will look for a way to externalize this work responsibly, striking a balance between the value of public testing and the risks of unrestricted use.”

One of the next goals of Google’s researchers is to “remove noise and unwanted content” that Imagen might otherwise draw on in its creations. “In particular, we used the LAION-400M dataset, which is known to contain a wide range of inappropriate content, including pornographic images, racial slurs, and harmful social stereotypes,” say the scientists. “As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision not to release Imagen for public use.”

It is easy to imagine the disasters that could result from the misuse of such a tool, and it is no surprise that Google does not want to take any risks. It remains to be seen whether such a powerful artificial intelligence can one day be put to good use.
