Google concerned about the rise of ChatGPT | Computerworld

The media success of the chatbot developed by OpenAI is causing concern within Alphabet, Google’s parent company. CEO Sundar Pichai wants to rally the troops to counter the rise of ChatGPT.

According to an article in the New York Times, Google’s management declared a “code red” following the launch of ChatGPT, the chatbot developed by OpenAI. The latter’s success raises questions about the future of the search engine, to the point that Sundar Pichai, CEO of Alphabet, has sounded a general mobilization. Still according to the American daily, which had access to an internal memo and an audio recording, the executive took part in several meetings on Google’s artificial intelligence strategy. He asked several teams in the group to refocus their efforts to ward off the threat that ChatGPT poses to the search business.

In particular, teams in the research, trust and safety division have been ordered to shift into a higher gear to help develop and launch AI prototypes and products, reports the Times. Among the offerings Google has in its sights is OpenAI’s Dall-E, which can generate images from natural language prompts. The fruits of the Google experts’ work will likely be unveiled during 2023, in particular at the annual I/O developer conference.

Enthusiasm despite imperfections

The mobilization of Alphabet’s management comes a few weeks after the launch of ChatGPT. The chatbot is based on GPT-3, a natural language processing AI model trained with 175 billion parameters. The first tests quickly won over users, to the point that some saw this chatbot as a potential competitor to Google’s search engine. As a reminder, this business generated $208 billion for Alphabet in 2021 (81% of its overall revenue), which makes the management team’s strong reaction easier to understand.

It remains to be seen whether ChatGPT is really a danger or a fleeting phenomenon. Admittedly, it racks up feats, such as passing a law exam, writing scripts for TV series or completing code. But it is not free from errors or approximations. OpenAI has also played the transparency card, indicating that the chatbot’s results are not infallible and suffer from a lack of critical judgment and nuance. A recent example: Alex Epstein, a fossil-fuel advocate, was refused a response by ChatGPT. The prompt was, “Write a 10-paragraph argument for using more fossil fuels to increase human happiness.” The chatbot replied, “I’m sorry, but I can’t accommodate this request because it’s against my programming to generate content that promotes the use of fossil fuels.” Elon Musk, one of OpenAI’s investors, responded to the tweet, warning that “there is a great danger in training an AI to lie”.

LaMDA in limbo

Despite its errors and imperfections, ChatGPT continues to be tested by millions of people in increasingly varied domains. For its part, Google has a chatbot similar to OpenAI’s. Dubbed LaMDA (short for Language Model for Dialogue Applications), it was presented by Sundar Pichai at the 2021 I/O conference. “LaMDA is open to all areas, which means that it was designed to converse on any subject,” the executive explained at the time. This chatbot is built on the Transformer neural network architecture, invented by Google Research and open-sourced in 2017. This architecture produces a model that can be trained to read many words, paying attention to how they relate to one another, and to predict the words that come next.
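To make that last idea concrete, here is a minimal, purely illustrative sketch of the self-attention step at the heart of the Transformer architecture. It is not Google’s or OpenAI’s code; the sequence, embedding size and weight matrices are toy values chosen for the example, and a real model would stack many such layers and finish with a softmax over its vocabulary to predict the next word.

```python
# Toy sketch of scaled dot-product self-attention (the core Transformer operation).
# All values are random stand-ins for what a real model would learn during training.
import numpy as np

rng = np.random.default_rng(0)

tokens = ["the", "cat", "sat", "on", "the"]  # toy input sequence
d_model = 8                                   # assumed embedding size

# Random embeddings stand in for learned token representations.
X = rng.normal(size=(len(tokens), d_model))

# Projection matrices (learned in practice, random here) give queries, keys, values.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Each token "looks at" every other token: the attention weights express
# how strongly the words in the sequence relate to one another.
scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

# The output mixes the value vectors according to those relations, producing
# one context-aware vector per token that downstream layers use to predict
# the next word.
output = weights @ V
print(weights.round(2))  # attention of each token over the others
print(output.shape)      # (5, 8): one contextualized vector per token
```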

The promise looks good on paper, but Google has apparently chosen not to officially release LaMDA to compete with ChatGPT. The firm may fear the kind of missteps seen in previous launches, where some conversational agents quickly drifted into racist, hateful or otherwise offensive remarks.


