Google reduces the reach of AI content: truth, fears, and facts

25 April 13:55

Artificial intelligence is rapidly changing the rules of the game in the communications sector. At the panel “AI in Communications. Business and PR. What are the benefits and hidden threats?”, speakers debated whether AI-generated content really gets less reach, or whether this is simply fear of the new. Dmytro Samolyuk, Head of Digital Media Group at We Are Ukraine, shared his alarming observations on the underreach of AI-generated content, while Oleksandr Krakovetskyi, researcher and author of a bestselling book on generative AI, argued that what matters is the value of the content, not its source. "Komersant Ukrainian" looks at what is going on behind the scenes of Google's and YouTube's algorithms.

Dmytro Samolyuk, Head of Digital Media Group at We Are Ukraine, shared a number of insights into the interaction of AI-based content with Google and YouTube algorithms.

“We started to notice that both YouTube and Google downrank your content if it was created with artificial intelligence. Google does not accept websites created exclusively by AI without human participation. YouTube also ‘jams’ videos generated by neural networks, reducing their reach,” said Dmytro Samolyuk.

Samolyuk emphasizes that automated content without human intervention is unviable in the eyes of Google’s algorithms. The same logic applies on YouTube: the algorithms reduce a video's reach when it is labeled as created with AI:

“YouTube managers say that the ‘created with AI’ checkbox doesn’t affect anything. But in practice, we see that it does. If the label is present, the video's reach is reduced,” said Dmytro Samolyuk.

We are already on the verge of the AI-first era

At the same time, Oleksandr Krakovetskyi, CEO of DevRain, CTO of DonorUA, AI enthusiast, and author of ChatGPT, DALL-E, Midjourney: How Generative Artificial Intelligence Changes the World, shared a different opinion. He denied the widespread fear of Google’s “pessimization” of AI-generated content:

“Google officially states that they do not pessimize AI-generated content. You cannot promote your own generative tools (Gemini, NotebookLM, Duet AI) with one hand and pessimize the result of their work with the other. If an article is of high quality, it doesn’t matter whether it was written by a human or an algorithm. The reason is quite trivial: algorithms generate texts in natural language, and that naturalness makes it impossible to tell how a text was created. Of course, provided that the person who generated the text has learned to use the tools correctly. But even this period will pass quickly, as each subsequent model improves the ability to create high-quality texts even from low-quality queries,” said Oleksandr Krakovetskyi.

The AI enthusiast says that artificial intelligence is not just helping today, it is changing the very structure of professions and media.

“Today we live in the AI-assisted era, but tomorrow it will be AI-first. This will mean that the role of humans in many processes will change, until they are completely eliminated. Will this mean mass layoffs and the collapse of the profession? Of course not. People will have many other tasks and functions,” Krakovetskyi emphasized.

According to him, AI agents that perform tasks better than humans by the end of the year are not futurism but an approaching reality. He recalled the example of the Japanese startup Sakana AI, whose system independently wrote a scientific paper that successfully passed peer review.

Krakovetskyi also noted that new artificial intelligence models are making rapid progress on many benchmarks:

“The IQ of some flagship large language models already exceeds 130, which is a very high score. On one of the math benchmarks, accuracy rose from 13% to 83% within a few months. Scientists and AI researchers do not always have time to invent new ways to evaluate progress,” said Oleksandr Krakovetskyi.

In conclusion, Krakovetskyi emphasized that, in the context of information warfare, it is extremely important to create one's own clean datasets for model training in order to resist information “poisoning”:

“Any organization of the future is an IT organization. And the task of this organization is to create an infrastructure for AI. We shouldn’t be afraid of this, we should be ready for changes,” Oleksandr Krakovetskyi

The problem of source quality and “information noise”

Another, deeper threat to AI in journalism, according to Samolyuk, is the bad data on which models are trained. If the quality of the input information is low, the output will be just as poor.

“What will AI generate if all journalists stop writing news? If there is no input, there will be no output. It is very important what these models learn from,” emphasized Dmytro Samolyuk.

He mentioned cases when AI learns from manipulative content:

“We had a case where a well-known person in Russia bought a bunch of publications in Ukrainian and Western media, and then made fake materials based on them,” said Dmytro Samolyuk.

Despite the risks, Samolyuk does not consider AI an enemy of journalism or PR. On the contrary, he sees it as a tool for generating ideas, not a complete replacement:

“Ask AI to generate 5 ideas, and maybe none of them will work for you, but one of them may lead you to an idea of your own. It is an inspiration generator,” Dmytro Samolyuk said.

He also drew attention to the rapid improvement of generative models. A year ago, AI was not capable of satire. Now it offers it: “If you want, I can write a headline in a clickbait style or in the style of a particular publication.” This is already a full-fledged creative partner, says the Head of Digital Media Group at We Are Ukraine.

So, AI is a powerful tool, but content created without human involvement loses credibility with users and with Google's and YouTube's algorithms alike. Experts advise not relying entirely on neural networks, but combining human expertise with technological support.

Anastasiia Fedor
Author
