I still firmly believe that we should not be afraid of machines themselves, but of how people may use them. Yes, technology is evolving; yes, in computational ability, decision-making within a particular domain, flexibility, and the capacity for self-learning, machines may exceed humans many times over. But machines remain machines. The robot's artificial intelligence did not "invent its own language" on a whim or of its own free will, because it has no free will in principle. It did so because that function was programmed into it by its designers. For a machine, robot, or program to pose a real danger to humanity (in the manner of Skynet from The Terminator), someone would have to program destructive motivation into it. At the moment, machines are tools, and every action they perform is assigned to them by the people who create them.
In a narrow sense, any machine can already be feared today, starting with an ordinary vehicle: operating it without observing basic safety precautions, or the failure of a key mechanism, can lead (and does lead) to disastrous consequences. This is a risk factor.
If an inventor deliberately builds into a machine the motivation to be dangerous to people, then the question should be addressed to the inventor, not to the machine itself.
The technological singularity is a hypothetical moment after which, according to supporters of the concept, technological progress will become so fast and complex that it will be impossible to comprehend. It is presumed to follow the creation of artificial intelligence and self-replicating machines, the integration of humans with computers, or a significant leap in the capabilities of the human brain achieved through biotechnology.
Vernor Vinge believes the technological singularity may occur as early as around 2030, while Raymond Kurzweil gives 2045. At the 2012 Singularity Summit, Stuart Armstrong collected expert estimates; the median of that sample was 2040.
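The median Armstrong reported is simply the middle value of the sorted forecasts, which makes it robust to a few extreme outliers. A minimal sketch with purely hypothetical forecast years (the real survey data is not reproduced here):

```python
import statistics

# Hypothetical expert forecasts of the singularity year (illustrative only,
# not the actual Singularity Summit survey responses).
forecasts = [2030, 2035, 2040, 2045, 2100]

# The median is the middle value of the sorted list; one very late
# outlier (2100) does not pull it upward the way a mean would.
print(statistics.median(forecasts))  # prints 2040
print(round(statistics.mean(forecasts)))  # prints 2050
```

This is why a median is the natural summary for such surveys: a single expert answering "2100" shifts the mean by a decade but leaves the median untouched.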