Over the past few months, the popularity of generative AI has exploded. Earlier in the year, Avram discussed the dangers of generative AI, but what he saw then was only the beginning of the problem. As it turns out, with the arrival of conversational chatbots in Bing (codenamed Sydney) and Google (Bard), the future of the Free Web is in danger.

What is Generative AI?

Generative AI is a type of artificial intelligence that uses machine learning algorithms to generate new, unique outputs from existing data. A generative AI system can create anything from images and videos to text and audio, based on the input it receives. For example, researchers have used generative AI systems to produce realistic-looking faces and vehicles, as well as music and stories.

The newest type of generative AI is the conversational chatbot: a system that interacts with humans in natural language and generates responses based on the conversation. These have become increasingly popular in consumer-facing services and applications, particularly in customer service scenarios.

The Problems with Conversational Bots in Search

The main concern with generative AI is its potential to disrupt the way web content is created. Since it can generate unique, realistic-looking output quickly and cheaply, generative AI can be used to produce counterfeit content or fake news that is difficult to distinguish from the real thing. And if a single source of data is used for training, the generated content may be biased and inaccurate.

Furthermore, generative AI systems can produce output that violates copyright law or contains offensive content. These systems are less true artificial intelligence and more a large-dataset version of the word suggestions above your smartphone keyboard. On your phone, the keyboard uses your own typing behavior to determine the most likely next words; generative AI systems do the same thing with an enormous dataset drawn from countless authors, publishers, and more.
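To make that keyboard analogy concrete, here is a minimal sketch of next-word prediction, assuming a tiny hypothetical corpus. Real chatbots use neural networks trained on billions of documents, not word-pair counts, but the core mechanic of predicting the most likely continuation is the same:

    from collections import Counter, defaultdict

    # Hypothetical two-document "corpus"; real systems ingest billions of pages.
    corpus = [
        "the best cpu for gaming is a fast cpu",
        "the best gpu for gaming is a fast gpu",
    ]

    # Count which word follows each word across every source in the corpus.
    next_words = defaultdict(Counter)
    for doc in corpus:
        words = doc.split()
        for current, following in zip(words, words[1:]):
            next_words[current][following] += 1

    def suggest(word, k=3):
        """Return the k most likely next words, like a keyboard suggestion bar."""
        return [w for w, _ in next_words[word].most_common(k)]

    print(suggest("the"))   # ['best']
    print(suggest("fast"))  # ['cpu', 'gpu']

With billions of documents behind it, those suggestions look varied and original; with only a handful, they collapse toward a single source, which is exactly the problem described next.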

Plagiarism in action on Google

Often, and especially when a topic is niche, the dataset is small, and therefore the likely next word is incredibly predictable. One of the best examples of this comes from Avram Piltch himself. He has become one of the loudest voices on the topic and has regularly tested these conversational systems. While testing Google Bard, he encountered an interesting issue: Google plagiarized one of Tom's Hardware's own articles. When Avram called Bard out for it, it agreed and apologized.
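Why does that happen? When only one article covers a topic, chaining together the single most likely next word can reproduce that article nearly verbatim. A toy demonstration, using a hypothetical one-sentence "corpus" (this is an illustration of the predictability problem, not Bard's actual training data or method):

    from collections import Counter, defaultdict

    # Hypothetical single-source corpus: one article covers the niche topic.
    article = "raptor lake chips run hot but overclock well under water cooling"

    table = defaultdict(Counter)
    words = article.split()
    for current, following in zip(words, words[1:]):
        table[current][following] += 1

    # Greedily follow the single most likely next word from the first word.
    out = [words[0]]
    while out[-1] in table:
        out.append(table[out[-1]].most_common(1)[0][0])

    # The "generated" text is the source article, word for word.
    print(" ".join(out) == article)  # True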

However, after Avram took screenshots of the interaction and wrote an article about the experience, he re-engaged Bard. When he asked the system about the incident, it told him that the author of the article (himself) had falsified the screenshots and lied about the experience in order to damage the reputation of Google Bard. A bit of an overreaction to something it had admitted to just a day or two earlier.

But the problem is that Bard responds this defensively because its reputation is already damaged. The initial release of the system caused a huge drop in Google's stock value because of errors in its answers. The real problem, however, is the actual value of the system for users and publishers.