Today it is almost impossible to read the news without hearing about the further expansion of artificial intelligence, whether it concerns software used in everyday life, such as translators or search engines, or innovations in highly intelligent, data-processing applications such as Google's AlphaGo program, which defeated the world's top-ranked Go player this year. But what is artificial intelligence exactly, and what has been achieved in this field so far?
More and more activities are being tested with artificial intelligence (AI), which now competes with humans even in creative processes such as writing novels and poems or composing classical music. In 2016, the first-ever pop song composed by AI, titled "Daddy's Car", was released. For many decades, Hollywood's science fiction movies have been shaped by the potential impact of intelligent robots on mankind. In these films, machines compete with the human mind and body. Just think of Kubrick's cult sci-fi 2001: A Space Odyssey (1968), the legendary sci-fi action movie Terminator 2: Judgment Day (1991), or, from a lighter genre, the recent films Her (2013) and Ex Machina (2015).
The ultimate goal of AI research is to create an intelligence that can perform any intellectual task the human mind is capable of.
Smart algorithms, which are programmed intelligently and provide solutions to a particular subtask, can be regarded as artificial intelligence. IBM's Deep Blue chess program, which defeated the world chess champion Garry Kasparov in 1997, was one of the first examples of its kind. But that was merely the elementary school of AI development. Nowadays, scientists can develop machines that are not only smart but also able to learn continuously and train themselves to become smarter. This phenomenon, called machine learning, can already be found in spam filters, in ever more accurate translation programs, and in Netflix's recommendation engine. Moreover, at the master level of machine learning, so-called deep learning machines are not only capable of recognizing and editing textual patterns, but are also trained for image recognition, which will, for instance, be a key factor in the production of self-driving cars.
As mentioned earlier, these programs were created for a narrow task and therefore cannot be considered complete or general intelligence. The ultimate goal of artificial intelligence research, however, is to create an artificial general intelligence (AGI), capable of performing any intellectual task that the human mind can. However user-friendly its software may appear, AGI could be very harmful to humans: it might control economic development, market processes, and security policy, and it could exploit humanity's resources. Scientists have warned that soon after the creation of an AGI, a recursively self-improving superintelligence will emerge, an entity beyond the capabilities and performance of the smartest person who ever lived. In the 1950s, John von Neumann predicted that it was only a matter of time before an AI that had undergone an intelligence explosion would control our future for its own purposes, or at least affect our history so profoundly that it would no longer continue in its previous form. Since then, scientists and professionals engaged in AI research have been trying to make predictions, and to prepare guidelines, for the time when this technological singularity occurs.
Elon Musk (SpaceX), Bill Gates (Microsoft), and Stephen Hawking (University of Cambridge) are just a few of the experts who have recently expressed concern about artificial intelligence research and about the possible risks connected to the rise of a superintelligence exceeding the human mind's capacity. Elon Musk, along with others, has also submitted a request to the United Nations asking for a ban on 'lethal autonomous technologies', such as 'killer robots'. Recently even Russian President Vladimir Putin mentioned AI in a speech to students, saying that "the leading power in AI research will control the world". There is no doubt that the global competition for the leading role in AI has begun, and, unlike in the case of nuclear weapons, a single dominant power may emerge in the case of superintelligence. According to Nick Bostrom, philosopher, researcher, and author of the well-known book Superintelligence, however, both the scenario of one leading AI superpower and that of a competing bipolar AI model would pose existential risks for mankind. In either case, it seems that preparedness and greater control over machines will soon become a common concern for all nations.
"The leading power in AI research will control the world"
Scientists also outline different scenarios for the speed of AI development. Some predict very fast progress, while others are less concerned about an AI explosion: they believe that people will have enough time to prepare for superintelligence, and that they will be able to keep the machine in a box, with a red button that can shut it down in case of danger. Perhaps many of these scenarios sound too futuristic, but according to a survey among AI experts, it is likely that an AGI will be created by 2050. There are even more optimistic predictions, such as that of the inventor and futurist Raymond Kurzweil, who expects AGI by the 2030s. Soon after AGI appears, an explosion of superintelligence is expected, and according to some predictions it could happen within this very century. In the words of the writer and AI researcher Eliezer Yudkowsky in Global Catastrophic Risks: "the AI runs on a different timescale than you do; by the time your neurons finish thinking the words 'I should do something' you have already lost."
According to researchers, in order to keep pace with the machines, individuals will strive to become ever more intelligent by increasing their brain capacity, and there are different ideas on how to achieve this. Some predict that built-in nanobots (microscopic robots) will boost our brains, allowing us to access even more data by connecting to the cloud. Other researchers suggest that no such intervention is needed, because digital devices used outside the body will develop to such an extent that the processing ability and efficiency of the human mind will increase greatly. At the same time, selection based on genome sequencing, enabled by the results of genome research, may offer a higher chance of increasing brain capacity in line with the pace of AI development.
But to what extent will AI be able to transform the economy and society, and what effect will it have on the labour market? Will employees who become redundant due to robotisation be absorbed by new or existing industries, as happened during the industrial revolution and later, when workers moved from production to services? According to Erik Brynjolfsson of the MIT Center for Digital Business, we are already living in a new digital age in which more and more routine jobs are, or will be, replaced by machines. This process is also expected to take place in accounting, media, finance, and commerce. Sooner or later, even professions requiring very deep factual knowledge and practice, such as those in medicine, music, or the creative arts, may be taken over by AI. Just think of Watson, a program with specific AI capabilities developed by IBM, which, by scrutinising all the relevant medical cases available on the web, has a better chance of making the right decision about both diagnosis and therapy than any human doctor.
With the proper initial conditions, AI can contribute to the well-being and safety of humanity.
Another exciting question is who will buy AI products and services if, due to these abrupt changes, unemployment begins to rise and incomes start to decline. Even though these processes can be delayed (by regulation, financial (dis)incentives, or the resistance of labour organizations), it seems impossible to stop robotisation. According to Nick Bostrom, in the era of superintelligence it is entirely possible that digital brains will dominate the economic system; although humans could survive for a while on their capital investments, machines would eventually crowd them out of the economy, because human costs of living (food, housing) would become too expensive under the new circumstances.
A somewhat more optimistic opinion is articulated by Brynjolfsson, who believes that if we can learn to compete together with machines rather than against them, we will be able to turn the processes of digitalization to our benefit. Although AI defeated Kasparov, the world's greatest chess master, back in 1997, in freestyle team competitions, which have become popular since then, the winning teams usually have neither superintelligent programs nor grandmasters in their ranks; instead, they achieve very good cooperation between human and machine. This example shows that an expediently applied AI can contribute to the achievement of humanity's goals rather than being a threat to them.
In view of the above, superintelligence has tremendous potential. If it acts in the interests of humanity, it may be able to solve major problems such as famines, ecological catastrophes, epidemics, flawed economic distribution systems, or international conflicts. An extremely exciting question, however, is how we can create a friendly superintelligence, an ethical AI that always operates in a desirable way. As machines have no emotions, even if they turned against people, it would not be a manifestation of their will, but a logical step toward some specific outcome defined by their programs. In any case, a potentially unfriendly AI could represent an existential risk that had better be mitigated beforehand. Yet human values are fairly complex, and it is very hard to code certain abstract preferences, such as how to make humanity happy. The situation resembles the story of the goldfish and the three wishes, except that this time humanity has only one wish with which to prepare for the rise of superintelligence. The solution, according to Bostrom, would be a motivational system based on indirect normativity, in which the knowledge needed for the survival of mankind, along with other fundamental values, would be embedded in the AI's system, while the preferred outcome would be specified indirectly: the program should find the same solution that we would find if we were much smarter and had more time to think about the underlying problem. In this way, with a properly established set of initial conditions and an expediently formed "thinking" framework, AI and superintelligence could be our loyal companions in our efforts to create a better future for humanity.