Large language models today are not a neutral technological tool, but a cultural phenomenon in which worldview differences, historical contradictions, and reflexive paradigms are marked (“labelled”). Differences and contradictions are the norm of development, expressing the uniqueness of cultures, writes Ekaterina Tikhomirova for the 22nd Annual Meeting of the Valdai Discussion Club.
Each culture has its own “worldview”, realised in its language. In large language models (LLMs), this worldview is not only presented to users in natural language but is also articulated in programming languages.
Therefore, a discussion of AI technologies (primarily LLMs) is not only a conversation about states competing for resources and energy capacity for data centres. It is also a discussion of the risks of hidden cultural expansion, in which algorithms become conduits for alien paradigms.
LLMs are carriers of civilisational projects, and their impact grows stronger the more authoritative and objective the algorithmic responses (and, of course, the standing of the development team) seem.
A paradox has already become apparent: a technology designed to unite humanity within a single information space (“on the path to unstoppable progress”) actually reproduces and amplifies civilisational rifts. Moreover, in the context of increasing platform concentration, the pressing question arises of cultural sovereignty and the right of nations to their own interpretation of reality in LLMs. AI alignment is not about equalising and levelling cultural differences, but rather about articulating uniqueness.
Conflicts won't “arise”. They already exist
Today, we are dealing not merely with cultural differences in LLMs, but with conflicts already embedded within them. Large language models were created from bodies of text and images in which contradictory values clashed: attitudes toward family and love, goodness and justice, beauty and ugliness. During training, these blocks of meaning turned into an “invisible map” of conflicts, which now manifests itself in the responses of AI bots. AI is not neutral: it returns results imprinted with the culture in which it was created and labelled.
This is not due to a deliberate choice by developers, but to the very nature of training. Models absorb two levels of labelling: technical labelling, when data is annotated for machine processing (so that the algorithm can correctly interpret it), and semantic labelling, when data is labelled by experts from a specific cultural environment (who set the boundaries of what is acceptable and appropriate). As a result, what is considered normal or valuable in one civilisation is perceived as a distortion in another. This is precisely the starting point not for hidden ideological contradictions, but for open conflicts.
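A minimal, hypothetical sketch (in Python) can make the two levels concrete; the field names and label values below are illustrative assumptions, not any real dataset schema:

```python
# Sketch of the two levels of labelling described above.
# All field names and values are invented for illustration.

from dataclasses import dataclass

@dataclass
class TrainingExample:
    text: str
    # Technical labelling: machine-oriented metadata so the algorithm
    # can correctly interpret the record (language, task format).
    language: str          # e.g. "ru", "en"
    task: str              # e.g. "dialogue", "caption"
    # Semantic labelling: judgements made by annotators from a specific
    # cultural environment, which set the boundaries of the acceptable.
    is_appropriate: bool   # "acceptable" by the annotators' norms
    sentiment: str         # culturally loaded judgement

# The same text can receive different semantic labels from different
# annotation teams, while the technical labels stay identical.
example_team_a = TrainingExample(
    text="A family gathers for a traditional holiday feast.",
    language="en", task="caption",
    is_appropriate=True, sentiment="positive",
)
example_team_b = TrainingExample(
    text="A family gathers for a traditional holiday feast.",
    language="en", task="caption",
    is_appropriate=True, sentiment="neutral",  # another culture reads it differently
)
```

The technical fields would survive a change of annotation team unchanged; the semantic fields would not, and it is the semantic layer that the model absorbs as “normal”.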
There are plenty of examples. Western image-generating or multimodal models, when creating pictures of Russian scenes, produce stereotyped characters “in ice and suffering” – emaciated, grey, with sad facial expressions. Although Russian models are called “ours”, in reality their development started from pre-trained Western cores; therefore, they too produce “distorted” images.
Chinese models have the same problem: even in a European narrative, with a high-quality prompt, they “mix in” their own motifs, such as the characters’ clothing patterns and characteristic architectural elements. We are not seeing neutral generation, but the transfer of cultural codes from one space to another. The question arises: does alignment – special training of the kind used in Constitutional AI – exist only in the imagination of AI-DEV, the global development teams, or is this being done deliberately?
The average user doesn't notice the ideological contradictions in the subtle details of the design: they receive a ready-made image and consider it “natural.” They're more likely to be irritated by six fingers or three feet. But experts see that these aren't just differences, but conflicts of worldviews built into the algorithms. LLMs are becoming a testing ground for the clash of cultural paradigms.
Platform power. Softest power
If in the 20th century the competition was for oil and gas, then in the 21st century it is for computing power and access to human consciousness. Data centres are built not for the sake of “noisy server ventilation”, but to expand the presence of AI in everyday life. The more users interact with algorithms daily, the greater their control over perception, emotions, and identity.
On the one hand, there are functioning AI systems (and other AI technologies) that are genuinely useful for managing transportation and urban utility networks, for medical analytics, for processing data from particle accelerators, and for deciphering ancient manuscripts. Their work is based on precise data, and scientists and engineers use them brilliantly and without controversy in research and professional fields. Society barely notices the work of such AI, although these systems have been with us for several years. Professional AI demonstrates the universality of precise scientific knowledge, while LLMs are culturally laden.
“Soft power” – the gentle force of cultural influence – is acquiring hyper-significance with AI, becoming the “softest power”. This is a targeted process not simply of exporting culture, but of subtly manipulating users’ attention, habits, and even emotions through AI bots. Algorithms suggest what to read, what to watch, and how to interpret – as long as the prompt is not strictly formulated.
The advice of algorithms is platform power: a few global centres concentrate access to data, to infrastructure, and to the very ability to shape the image of the Other (other cultures, other people, other values) – or to flatten it, so that everything is the same, convenient, and “mine”.
Centralised platforms are the architects of cultural reality: telecoms, hardware and software manufacturers, and AI platforms – Anthropic, OpenAI, Google, Meta, xAI, DeepSeek, Qwen, Sber, Yandex. They define “windows of meaning” through which the user, without critical reflection, sees only a subset of possible interpretations. Algorithms rank and filter based on development priorities, thereby shaping the horizon of perception.
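A minimal, hypothetical sketch of such ranking – the scores and weights below are invented purely for illustration – shows how shifting weight from relevance to platform priorities reorders what the user sees first:

```python
# Sketch: the same candidate interpretations, reordered by how much
# weight the platform gives to its own priorities. All numbers invented.

interpretations = [
    {"title": "Interpretation A", "relevance": 0.9, "platform_priority": 0.2},
    {"title": "Interpretation B", "relevance": 0.7, "platform_priority": 0.9},
    {"title": "Interpretation C", "relevance": 0.8, "platform_priority": 0.5},
]

def rank(items, priority_weight):
    # The higher priority_weight, the more the platform's own preferences,
    # rather than relevance to the user, decide what is shown first.
    return sorted(
        items,
        key=lambda it: (1 - priority_weight) * it["relevance"]
                       + priority_weight * it["platform_priority"],
        reverse=True,
    )

print([it["title"] for it in rank(interpretations, priority_weight=0.0)])
# -> ['Interpretation A', 'Interpretation C', 'Interpretation B']
print([it["title"] for it in rank(interpretations, priority_weight=0.8)])
# -> ['Interpretation B', 'Interpretation C', 'Interpretation A']
```

The user never sees the weight, only the resulting order; this is what it means for the “window of meaning” to be defined upstream of any individual query.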
When local AI models begin to lag behind the global giants, they are forced to use foreign pre-built cores and ready-made datasets, or to hire foreign specialists to perform not only technical but also semantic labelling. As a result, cultural identities are threatened: national narratives are supplanted by standardised frameworks. The consequences are dangerous: the diversity of meanings diminishes, and cultures lose their unique intonations. If this process is not balanced by distinctive models, we will wake up tomorrow in a world where different civilisations cease to recognise themselves.
One of the risks is the subject’s loss of the cultural Other: in dialogues with AI bots, users receive an echo, a reflection of their own needs. The possibility of intercultural, live communication disappears.
Therefore:
The future lies in the formation of an ecosystem of diverse national AI systems that reflect the richness of human cultures. A dialogue among civilisations is possible with the mutual recognition of each culture’s right to its own algorithmic interpretation of reality. The digital and AI world must be multipolar: users must be able to choose between different “digital worldviews”, compare approaches, and formulate their own position based on diversity.
We are entering an era of techno-humanism, where technology is not opposed to humans, but rather evolves alongside them. Evolution continues in the form of coevolution, and its result will be a new type of subject—Homo prudens, a wise and responsible human being capable of combining the power of algorithms with universal morality and cultural identity.