The Korpus publishing house is releasing a Russian edition of Life 3.0 by Max Tegmark, the Swedish-American physicist and cosmologist and professor at the Massachusetts Institute of Technology. In it, the author discusses the changes that the emergence of superintelligent artificial intelligence will bring to life on Earth and weighs its possible advantages and disadvantages – from the appearance of a wealth of new technologies to the most unpredictable consequences. “Snob” publishes one of the chapters.

Artificial Intelligence for Space Research

To begin with, something close to my heart: space exploration. Computer technology has allowed us to fly to the Moon and to send unmanned spacecraft to explore all the planets of our solar system, and even to land on Titan, a moon of Saturn, and on a comet. As we will see in Chapter 6, future AI may help us explore other star systems and galaxies – provided it works without glitches. On June 4, 1996, scientists hoping to study the Earth’s magnetosphere rejoiced as the European Space Agency’s Ariane 5 launch vehicle soared into the sky carrying the instruments they had built. Thirty-seven seconds later their joy evaporated: the rocket exploded, turning into a multimillion-dollar fireworks display. The cause, as it turned out, lay in the software, which failed when it tried to handle a number too large to fit into the 16 bits of memory allocated for it. Two years later, NASA’s Mars Climate Orbiter spacecraft accidentally entered the atmosphere of the Red Planet and perished, all because two of its software modules used different units of force, producing a 445% error in the calculation of the required engine thrust. This was the second most expensive software glitch in NASA’s history; the first came when its Mariner 1 mission to Venus ended in an explosion shortly after launch from Cape Canaveral on July 22, 1962, after the flight-control software was derailed by an incorrect punctuation mark. As if to prove that the West held no monopoly on the art of computer glitches in space, the Soviet Phobos 1 mission failed on September 2, 1988. It was the heaviest interplanetary spacecraft ever launched, and the mission’s goal was to land a station on the surface of Phobos, a moon of Mars; everything collapsed when a missing hyphen caused part of the program text to be interpreted as an “end of mission” command, which was sent to the spacecraft while it was on its way to Mars.
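To make the Ariane failure mode concrete, here is a minimal Python sketch. It is purely illustrative and not the actual flight code, which was written in Ada and crashed with an unhandled exception rather than silently wrapping around; the point is simply what happens when a value too large for 16 bits is forced into them.

```python
import struct

def to_int16(value):
    """Force a number into a 16-bit signed integer, silently wrapping on overflow."""
    # Keep only the low 16 bits, then reinterpret them as a signed value.
    return struct.unpack('<h', struct.pack('<H', int(value) & 0xFFFF))[0]

# A hypothetical sensor reading far beyond the 16-bit range of -32768..32767:
horizontal_velocity = 40000.0
print(to_int16(horizontal_velocity))  # prints -25536: garbage that downstream code trusts
```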

The lesson to draw from all these stories concerns the importance of what is usually called verification: making sure that the installed software fully meets all the requirements placed on it. The more lives and resources are at stake, the higher our confidence must be that the software will work as intended. Fortunately, artificial intelligence can help automate and improve the verification process. For example, the kernel of the general-purpose operating system seL4 has recently been verified mathematically end to end, providing a strong guarantee against crashes and the execution of unsafe operations. And although it does not yet offer all the bells and whistles of MS Windows or MacOS, working with it you can be sure you will see neither the “blue screen of death” nor the “spinning wheel of doom”. The United States Defense Advanced Research Projects Agency (DARPA) has funded the development of a suite of high-assurance open-source tools for cyber-military systems (HACMS), each of which is verifiably reliable. To bring such tools into widespread use, they will have to be made powerful enough and easy to use. Another difficulty is that verification will have to be extended to robots and other new environments, and traditional software itself will be replaced by artificial-intelligence systems that are able to learn, and therefore to change their behavior, as mentioned in Chapter 2.
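As a toy illustration of what automated checking of a requirement can look like (all names here are hypothetical, and this is an ordinary property-style test, not DARPA’s actual HACMS tooling), consider the kind of unit-conversion requirement whose violation doomed the Mars Climate Orbiter:

```python
import random

# 1 pound-force is about 4.448 newtons: roughly the 445% discrepancy mentioned above.
NEWTONS_PER_POUND_FORCE = 4.448

def thrust_in_newtons(pound_force):
    """Convert thrust to newtons, making the unit explicit in the function name."""
    return pound_force * NEWTONS_PER_POUND_FORCE

def test_thrust_conversion():
    # Check the requirement on many random inputs, not just one hand-picked case.
    for _ in range(1000):
        lbf = random.uniform(0.0, 10_000.0)
        n = thrust_in_newtons(lbf)
        assert n >= lbf                                       # newtons always exceed pound-force
        assert abs(n / NEWTONS_PER_POUND_FORCE - lbf) < 1e-6  # conversion round-trips

test_thrust_conversion()
print("all checks passed")
```

Tests like this cannot prove the absence of all errors, which is why projects like seL4 go further and prove correctness mathematically.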

Artificial Intelligence in Finance

Finance is another area that has been transformed by information technology, which allows resources to be redistributed around the world at the speed of light and provides affordable funding for everything from mortgages to start-ups. Progress in artificial intelligence is likely to open even greater opportunities for profit in financial transactions: most buy and sell decisions on stock markets are now made automatically by computers, and my graduating students at MIT are tempted every year with astronomical starting salaries to work on improving trading algorithms.

Verification is extremely important for financial software, as the American firm Knight Capital learned from bitter experience on August 1, 2012, losing $440 million in 45 minutes after deploying untested trading software. The famous trillion-dollar crash of May 6, 2010, known as the Flash Crash, deserves attention for another reason. Although for half an hour that day prices went haywire and the shares of some large companies such as Procter & Gamble swung between a penny and $100,000, the problem was not caused by program bugs or computer faults that verification could have caught. The cause lay in broken assumptions: automated trading systems were acting on built-in assumptions that had stopped being true.

The Flash Crash clearly demonstrated the importance of what in computer science is called validation: whereas verification asks, “Did we build the system right?”, validation asks, “Did we build the right system?”. For example, does the system rely on assumptions that may not always hold? If so, how can it better handle that uncertainty?
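To make the distinction concrete, here is a minimal hypothetical sketch in Python: the function below passes verification, since it does exactly what its specification says, yet fails validation, because the specification rests on an assumption that stopped holding during the Flash Crash.

```python
def fair_price(bids):
    """Spec: return the average of the quoted bid prices."""
    return sum(bids) / len(bids)

# Verification passes: the code matches its specification exactly.
assert fair_price([10.0, 12.0]) == 11.0

# Validation fails: the spec silently assumes quotes are always sane, but during
# the Flash Crash some quotes swung between a penny and $100,000.
print(fair_price([0.01, 0.01, 100000.0]))  # ~33333.34, a "correct" but meaningless answer
```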

Artificial Intelligence in Manufacturing

Needless to say, artificial intelligence offers great opportunities for improving manufacturing through the control of robots, whose use raises efficiency and precision. Relentlessly improving 3D printers can now produce prototypes of everything from office buildings to micromechanical devices the size of a grain of salt. While huge industrial robots build cars and airplanes, compact and inexpensive computer-controlled milling machines and similar devices have become affordable not only to large factories but also to thousands of private enthusiasts – makers – all over the world, who materialize their ideas in small communal workshops, the “fab labs”.

But the more robots surround us, the more important it is that their software be thoroughly checked – verified and validated. The first person killed by a robot was Robert Williams, a worker at a Ford plant in Flat Rock, Michigan. In 1979 a robot that was supposed to fetch parts from a warehouse broke down, and Williams went to get the parts himself. Suddenly the robot silently started up and struck him in the head, killing him, and it went on working for 30 minutes until other workers discovered what had happened. The robot’s next victim was Kenji Urada, a maintenance engineer at a Kawasaki plant in Akashi, Japan. In 1981, while working on a broken robot, he accidentally hit its power switch and was crushed to death by the robot’s hydraulic arm. In 2015, a 22-year-old contractor at a Volkswagen plant in Baunatal, Germany, was setting up a robot designed to pick up car parts and put them in place. Something went wrong, and the robot grabbed him and crushed him to death against a metal plate.

Although each such event is a tragedy, it is important to note that they make up a tiny fraction of all workplace accidents, and that the total number of such accidents has fallen, not grown, as technology has developed: in the US there were about 14,000 workplace deaths in 1970 and 4,821 in 2014. All three tragedies described above show that adding intelligence to mindless machines should further improve industrial safety, by teaching robots to behave more carefully around people. All three accidents could have been avoided with better validation: the robots did harm not because of “glitches” and not out of malice, but simply because the assumptions they operated on were invalid – that no person was present, or that the person was an auto part.
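A hypothetical Python sketch of what such validation could look like in a robot controller: the assumption “no person is in the work cell” becomes an explicit condition, checked before every motion, instead of being silently baked in. The sensor interface below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class WorkCell:
    human_detected: bool  # in practice: a light curtain, lidar, or pressure mat

def next_action(cell):
    """Gate every motion on an explicit check instead of a baked-in assumption."""
    if cell.human_detected:
        return "halt"   # stop and wait rather than act on a possibly stale belief
    return "move"

print(next_action(WorkCell(human_detected=True)))   # halt
print(next_action(WorkCell(human_detected=False)))  # move
```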