Global Alternatives 2024
AI and the Regulation Question

If machines continue to develop awareness, conscience, perception, feelings, the ability to act independently and make ethical decisions, and other human characteristics that give them a certain humanoid identity, the question arises of how societies and constitutional systems will treat them. Will they be granted some kind of legal status?

During the last annual meeting of the Valdai Club, a remarkable amount of time was devoted to the problem of Artificial Intelligence. Apart from the excellent panel during the expert session, the beginning of the dialogue between Russian President Vladimir Putin and the experts was marked by the consideration of issues in this area. It was emphasised that we can already see how the major Western AI systems embed their own ideological, political and geopolitical positions into their algorithms, and that every civilisation or larger state with the ability to do so will have to create its own AI system, strive to control it sovereignly and compel its citizens to use it.

Therefore, it is obvious that the next step in this field is the issue of regulation, which has been widely debated. The first important steps have been taken, such as EU Regulation 2024/1689 of June 13, 2024, which establishes harmonised rules on artificial intelligence. This EU Artificial Intelligence Act (AI Act) is the first supranational, legally binding act of its kind.

However, it was created in reaction to ethical and legal problems that have been the subject of important discussions for almost ten years, so here we will try to present some of the most important issues that regulation will have to deal with, both at the level of individual states and at the level of international acts. Since I have been involved for a year now in a project examining the ethical and legal aspects of AI, I believe that our insights could be useful to our colleagues.

We should start with UNESCO's analysis, which singles out four basic fields.

1. Biased AI. AI reproduces the biases and stereotypes of the world it inherited and from which it draws data, relationships and modes of representation. Part of the task in this area is therefore not only to get the bot to give humans the answers they request as faithfully as possible, but also to teach it to ethically standardise the world it builds, avoiding discrimination and the treatment of certain ethnic, racial and other groups as problematic. A well-known example is that, as a rule, Black and dark-skinned people, Muslims, etc. appear most often in search results for “riots in the suburbs” or “crime”.

2. The next important and potentially controversial field is the use of AI in the courts. There is the idea that it could dispense justice faster, better and more efficiently than judges, living people with their inherited prejudices, habits and biases. The idea that it could help prosecutors prepare cases, as well as legislators in parliament, is also mentioned. In any case, we are speaking about what has been called the "automation of justice": the creation of special software for this type of activity. The following challenges arise here: a) a lack of transparency, because the system's workings are not visible to humans; b) AI is not neutral but subject to inaccuracies, discriminatory outcomes and built-in or embedded biases; c) a huge problem with monitoring, the compromising of privacy and the collection of data about people who are brought before the court; d) worries about fairness and human rights, the proportionality of the punishment in relation to the crime committed, misunderstanding of context, mitigating circumstances, etc. There are ideas that, with the help of AI, one could calculate how much punishment to assign to an offender, or whether to release him early from serving his sentence, based on assessments of the category he belongs to, the likelihood of reoffending, etc. (a toy illustration of such scoring logic is sketched just after this list).

3. Enormous problems have arisen in creativity and the creative industries, for example when it comes to the role of AI in the creation of art and works of art. There is the well-known example involving Rembrandt: AI created a completely new image based on deep learning from a study of Rembrandt's works. The basic question that everyone is interested in is who is now the author of this new work: the company, the engineers or Rembrandt himself? Would Rembrandt himself, or his heirs, agree that AI should create in his spirit and thus disregard inimitability as the basis of the artistic creation of genius? (Another related example is the work of Huawei's AI on completing Schubert's Eighth, the unfinished symphony.)

All of this is problematic and raises huge questions about the future of art and creativity: for example, protecting rights, preventing others from using your work, or finding a way to get paid for it. But above all, there is the problem of the integrity of creativity: how to protect and preserve human creativity, originality and genius from industrial learning and machine mimicry. And how does one distinguish piracy and plagiarism from originality and creativity? It was very difficult to define the limits of plagiarism even in the world before AI, and as AI develops, the problem, and the struggle in the grey zone, will only grow.

4. Autonomous vehicles. The development and use of autonomous machines entails enormous problems and dilemmas, both in civilian applications and perhaps even more so in the military, where they are already widely developed and used, regardless of our displeasure and fears. Even though such vehicles will contribute to the mobility of elderly, sick or blind people, questions inevitably arise, and new types of problems are on the way. For example, to develop an autonomous car or bus for use in urban transport, a vast amount of data must be gathered from all sides by the vehicle's software and then processed. The methods of collection are very controversial, and there is always the possibility of errors and mistakes, which in this field can have fatal consequences for many people involved in traffic, not only for those in the vehicle. The AI will also have to make decisions, e.g. choosing whom to hit at a crosswalk if the brakes have failed. There is the well-known trolley problem: a vehicle with failed brakes hurtling towards people at a pedestrian crossing must make an ethical decision whether to continue in that direction and kill the five people on the crossing, or to swerve towards the bench at the side where one person sits.
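To make the concern in point 2 about score-based sentencing and parole decisions concrete, here is a minimal, purely hypothetical sketch in Python. It does not reproduce any real judicial software; the features, weights and threshold are invented for illustration only, to show how treating the "category" a person belongs to as an input lets two otherwise identical defendants receive different recommendations.

```python
# Illustrative only: a toy, hypothetical "recidivism risk" score of the kind
# described in point 2 above. All feature names, weights and thresholds are
# invented for this example; no real system is reproduced here.

def toy_risk_score(prior_offences: int, age: int, group: str) -> float:
    """Return a score in [0, 1]; higher means 'riskier' to the toy model."""
    score = 0.1 * prior_offences + 0.01 * max(0, 40 - age)
    # The problematic step: the model treats membership of a statistical
    # "category" as evidence about the individual, embedding group bias.
    if group == "B":
        score += 0.2
    return min(score, 1.0)

def recommend_detention(score: float, threshold: float = 0.5) -> bool:
    """Toy decision rule: detain (or deny early release) above the threshold."""
    return score >= threshold

if __name__ == "__main__":
    # Two defendants identical in every recorded respect except the assigned
    # "category" receive different recommendations.
    for group in ("A", "B"):
        s = toy_risk_score(prior_offences=3, age=30, group=group)
        print(group, round(s, 2), "detain" if recommend_detention(s) else "release")
```

Even at this toy scale, the defendant sees only the final recommendation, not the weights or the group adjustment that produced it, which is exactly the transparency problem noted under a).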

All of this becomes further complicated when we consider the use of AI to develop robots and other machines for military purposes. Military circles have long been working on mechanisms for the so-called enhancement (that is, the improvement of healthy people) of soldiers, in order to suppress stress and thereby prevent the mass crimes it can lead to, but also to increase endurance and the capacity for action. The use of machines that do not suffer from stress and can work under pressure seems the ideal solution. Therefore, autonomous drones, vehicles and robots that will make their own decisions about selecting and eliminating targets are being widely developed.

When it comes to data, it is of course the most important resource for the development of AI, which entails problems with privacy, the misuse of data, illegal and illegitimate means of collection and processing, controversial data brokers, etc. There is the example of the company GlaxoSmithKline, which bought exclusive rights to research the genetic data of clients of the DNA testing company 23andMe for the purposes of drug development. The biggest Big Tech corporations are in the best position to collect such data, which Arvind Gupta calls the new oil, and there is a legitimate fear of their oligopoly in the AI field. Other countries will therefore first have to regulate how data is collected, but also help their own companies use it for the development of domestic AI systems.

Among the more important problems is, above all, the question of controlling the general development of machines: whether and when they will, on the one hand, reach a ‘singularity’ beyond which people will no longer be able to follow them, and, on the other, take jobs from people, something that is already threatening numerous professions in design, publishing, etc. The use of AI to create facial recognition software, or its development in the market for sex dolls, is also extremely interesting, since the first brothels of this type opened six years ago. Meanwhile, if machines continue to develop awareness, conscience, perception, feelings, the ability to act independently and make ethical decisions, and other human characteristics that give them a certain humanoid identity, the question arises of how societies and constitutional systems will treat them. Will they be granted some kind of legal status? Will they have rights? Will we treat them like animals with the same level of intelligence? Will machines be able to suffer, and will we then have to guarantee them the right to protection from suffering, injury or abuse? Finally, how will we determine the lifespan of AI bots and machines and decide when it is time to eliminate them? Will there be human-machine marriages?

Because of all this, after a series of attempts to adopt acts of self-regulation among actors in this field, states and international organisations have also become involved. In the USA, the first court rulings have been handed down prohibiting the use of AI in certain areas (social policy, for example) as dangerous and inadequate because it violates citizens' rights. In a number of countries, including Serbia, where we come from, the first judgments have been passed prohibiting the display and distribution of deepfakes, which can falsify speech, behaviour, video and images (e.g. the creation of a pornographic video featuring the singer Taylor Swift). Finally, the EU has passed the first comprehensive act, which we will present in one of the following texts.
