IT Specialist on the Prospects and Risks of Artificial Intelligence

Artificial intelligence cannot be given the right to make decisions that affect human destinies, and legislative restrictions are required to control it. This opinion is shared by Stanislav Ashmanov, an entrepreneur, a specialist in deep neural networks and machine learning, and a developer of image and voice recognition and synthesis systems as well as expert systems. In an interview with RT, he outlined the prospects and risks that accompany the rapid growth of artificial intelligence technologies. The specialist is certain that such developments bring significant benefits to society in manufacturing and medicine, but that they can pose a serious danger in the military sphere and threaten an invasion of privacy.

– Recently, the European Commission published "Proposals for the regulation of artificial intelligence based on a European approach." The document declares the need for a complete ban on the use of artificial intelligence systems in a number of cases. It proposes classifying systems that manipulate people and restrict their freedom of choice as posing unacceptably high risk. What is artificial intelligence in the modern sense, and what danger does Europe see in it?

– We, like most developers, use a non-romantic definition of artificial intelligence. For us, it is any automation of functions that were previously performed by a person using their intellectual capabilities, any replacement of human labor with a machine, with an algorithm.

In the case of the restrictions proposed by the Europeans, we are not talking about limiting development, but about prohibiting the use of mathematical algorithms that make decisions entirely on their own, without the participation of specialists.

– A question of application without human control …

– Sure. We can draw an analogy. The advent of road traffic regulations did not limit the development of the automobile in any way. Rules emerged that allow society to reduce road casualties while the technology develops in a controlled manner. It is the same here. Rules are being introduced, but this does not mean that less money will be invested in this area or that there will be fewer researchers. It simply means that society prepares in advance for the risks and the possible shocks that can occur due to improper use.


One such example is digital rankings. Everyone is already accustomed to the fact that banks assign a credit rating that can influence the fate of a citizen. The rating may determine whether a person can improve their living conditions, for example, buy an apartment or a house. Now imagine automatic decision-making systems that assign a person a global social rating, measuring more than just solvency…

– This is already some kind of dystopian plot …

– Yes, all of this was depicted long ago by the science fiction writers of the 20th century. There is a race in the field of automation and digitalization. No one wants to miss out on opportunities, but everyone must be mindful of potential shocks. For example, it is necessary to prepare for an increase in unemployment and introduce retraining programs in advance. Perhaps there will be some kind of certification agency for artificial intelligence products to make sure they are ethical. It is not only about introducing restrictions, but also about preparing government programs in advance so as not to find ourselves in a crisis without the appropriate instruments.

– This is a global process. Are all technologically advanced countries thinking about this, and will they change their legislation?

– The big powers, of course, say that the field of artificial intelligence needs to be financed. China, for example, invests hundreds of times more resources in artificial intelligence programs than our country does. The United States funds such research on a comparable scale. All of them understand not only the importance of this area and its prospects, but also the risks. Whoever prepares in advance will be ahead of everyone else.

– What exactly are the risks? Is it a question of invasion of privacy, a struggle for the freedom of citizens?

– First of all, we are talking about invasion of privacy: the fact that we are beginning to be monitored both offline and in the digital space. Secondly, about discrimination on various social grounds, which, of course, should not happen.

For example, it is categorically inadmissible to determine by indirect indications that a woman is pregnant and, because of this, refuse to hire her. Large corporations should not monitor online publications and cellular communications to decide whether to hire a person. In many ways, the restrictions apply to the big data accumulated by transnational corporations.

– Do intelligence services and the military fall under these restrictions?

– The restrictions will in no way affect the military sphere. This is a separate area in which the major powers, for example the United States, do not enter into dialogue; they only increase their budgets. International agreements to ban smart weapons will not pass. Most likely, drones that make the decision to strike on their own, and smart mines that distinguish the soldiers of one side from the other, will become widespread. All sorts of horrors are possible here if an agreement fails.

As Stanislav Ashmanov notes, the development of smart weapons and other military products in the field of artificial intelligence will not fall under legal restrictions

– Is there no regulatory mechanism for the military?

– Not yet. Moreover, the latest American strategy on artificial intelligence spells out that a presence of artificial intelligence must be achieved in all government organizations: for example, that intelligence agencies should use human-machine methods of working with information. Artificial intelligence will provide various tools for grouping and filtering data, and the intelligence officer will use them.

– Do you think that, in principle, any government with the introduction of such systems becomes more authoritarian?

– Yes, and not only state power. We must not forget the power of the corporations that hold gigantic customer bases. And they have entry points into the personal life of each of us (mobile applications, smart speakers, web services, and so on).

– You can talk as much as you like about the social responsibility of business, but without mechanisms of public control, it is impossible.

– This is regulated only by laws. There are laws on banking secrecy, medical secrecy, and personal data.

Now we are talking about the fact that automatic systems and algorithms must not be allowed to make decisions about human destinies. We should not see robotic judges or robotic lawyers who decide which parent gets custody of the children or who receives the property.

– How are things going with artificial intelligence in Russia – both in terms of legislation and in the field of technological development? And at what level do we participate in global competition?

– Russia already has working groups on the ethics of artificial intelligence, there is a concept for regulating this area. But there is no direct legislative regulation yet.

There are companies in Russia that develop world-class products, for example Yandex, ABBYY, Kaspersky Lab and others. But in general, we have a very small share of the global market for artificial intelligence products. At the same time, in terms of the level of competence and the number of specialists in this field, we are among the world leaders. Our school still maintains its position. This is natural, because a huge number of programmers and mathematicians have been drawn into the field of artificial intelligence. There could be more products with AI, but this direction is underfunded in Russia, which is one reason many specialists leave for foreign corporations. It is necessary to stimulate demand and finance development.

– Could a thinking, self-aware artificial intelligence emerge?

– My subjective opinion is that consciousness cannot arise in a machine at all. Instead, it is necessary here and now to deal with practical problems that can benefit society. But I know philosophers, publicists, and even developers who think differently.

– Now most references to artificial intelligence are related in one way or another to machine learning. But these are not the same thing?

– Artificial intelligence actually consists of more than machine learning; there is also the direction of expert systems. What is the difference? Let's say we asked doctors how their diagnostic algorithm and decision-making mechanism work, and then programmed that algorithm. The result is an expert system.

Artificial intelligence methods include machine learning and the creation of expert computer systems.

We use machine learning when we cannot find out from a person exactly how they solve a problem, but we can collect examples of solutions. On a large data set, you can train a neural network, for example, or any of the many other algorithms that now exist, to reveal implicit patterns in that data.
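The distinction described above can be sketched in a few lines of code. This is a toy illustration on an invented "fever triage" task; the rule, the threshold, and the data are all hypothetical, not anything from the interview.

```python
# 1) Expert system: the decision rule is programmed explicitly,
#    exactly as a (hypothetical) doctor described it.
def expert_triage(temperature_c: float) -> str:
    if temperature_c >= 38.0:
        return "see a doctor"
    return "rest at home"

# 2) Machine learning: no one states the rule; we only have labeled
#    examples, and the algorithm recovers the pattern from the data.
#    Here the "model" is just a single threshold fitted to examples.
def learn_threshold(examples):
    """examples: list of (temperature, label) pairs.
    Return the candidate threshold that classifies the most examples correctly."""
    candidates = sorted(t for t, _ in examples)
    best_t, best_correct = candidates[0], -1
    for t in candidates:
        correct = sum(
            (label == "see a doctor") == (temp >= t)
            for temp, label in examples
        )
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Collected examples of solutions (fabricated for illustration).
data = [(36.6, "rest at home"), (37.2, "rest at home"),
        (38.3, "see a doctor"), (39.1, "see a doctor")]
threshold = learn_threshold(data)  # the pattern recovered from data
```

In the first function the knowledge comes from the human expert; in the second it is extracted from examples, which is the essence of the difference the interviewee describes. Real machine learning replaces the single threshold with millions of parameters, but the principle is the same.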

– Both directions, according to the descriptions, seem quite harmless. So what is the main danger of artificial intelligence?

– The danger is that we delegate to artificial intelligence the responsibility for making decisions about our destinies. And it does not possess any, so to speak, moral computer, moral calculator. We can assume that it does not care about us.

And if we hand over important decisions to the machine, we will be held hostage by it. Moreover, it turns out that we are hostages not just of the algorithm, but also of the algorithm's developers. It is a kind of "big brother" in which a certain "father-developer" becomes the main manager. A digital dictatorship will emerge. That is not where we want to end up.

– What is the artificial intelligence market now?

– The market is developing rapidly. On average, the growth in Russia is also significant. Compared to other areas, our market is growing seven to ten times faster. All large companies see the economic benefits of using artificial intelligence. For example, if somewhere you can replace the call center with a voice robot, they will invest in it.

Small businesses can only use "boxed" products; developing anything non-standard is beyond their means. Face recognition, for example, is already a "boxed" product. You pay for a web service, connect it to a camera, and that's it, the camera becomes smart: it recognizes employees arriving at work and returning customers, and determines gender and age.
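The "boxed" workflow described above amounts to sending each camera frame to a vendor's service and reading back the recognized identities. The sketch below is entirely hypothetical: the service, its response format, and all names are invented stand-ins for whatever commercial API a business subscribes to, with the network call replaced by a local stub.

```python
# Hypothetical stand-in for a commercial face-recognition web service.
# In a real deployment this function would be an HTTP request to the
# vendor's endpoint; here it returns a canned response in a shape
# such services typically use.
def recognize_faces_stub(frame_bytes: bytes) -> dict:
    return {"faces": [{"identity": "employee:ivanova",
                       "gender": "female",
                       "age_range": "30-40"}]}

def process_camera_frame(frame_bytes: bytes) -> list:
    """Turn one raw camera frame into a list of recognized identities."""
    response = recognize_faces_stub(frame_bytes)
    return [face["identity"] for face in response["faces"]]
```

The point of the sketch is the division of labor: the camera side stays trivial (capture a frame, call a service, act on the result), while all the machine-learning complexity lives behind the paid API, which is exactly what makes such products accessible to small businesses.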

– In what areas does artificial intelligence bring the greatest public benefit?

– In industrial organizations, artificial intelligence reduces the number of accidents and breakdowns in production through predictive analytics and increases productivity through optimization of technological processes.

The most promising AI developments are in medicine, where a huge number of factors must be taken into account and qualified specialists are scarce or very expensive. Medicine is the area in which artificial intelligence will be able to bring the most significant benefits to society in the coming years.
