If you read the New York Times yesterday, you probably saw the story in which Elon Musk warned Facebook researchers about artificial intelligence. Musk, the founder and CEO of SpaceX and CEO of Tesla, claims A.I. is far more dangerous than nuclear weapons.
Musk expressed his concerns at a dinner party that Facebook CEO Mark Zuckerberg hosted at his home in Palo Alto, Calif. Musk said, “If we create machines that are smarter than humans, they could turn against us. The tech industry should consider the unintended consequences of what we are creating before we unleash it on the world.”
I asked several friends in the tech and science fields for their reactions to what Musk said.
Dr. Steve Mandy, Miami Beach, FL.
“Did you ever see the 1950s sci-fi movie Forbidden Planet? It tells the whole story.”
Gary Arlen, President, Arlen Communications, a research/analysis and consulting firm in Bethesda, MD.
“Nearly 30 years ago, I attended a seminar at Harvard where some participants described separate Pentagon-funded programs to develop robots, artificial intelligence and other features that would be the building blocks for synthetic soldiers. I’ve thought of this often during the current debate about the value of AI. One of the flaws back then was that such replicants (as in BLADE RUNNER) would lack human emotion and understanding/caring, which may have been good in a heartless war-fighter.
“So now, as a society, we’re confronting that same issue, which goes beyond the ‘superintelligence’ discussed in the Times article. Yes, AI robots, as ‘personified’ (pun intended) in movies, could be dangerous. But there are more sinister, non-humanoid ways that A.I. affects our lives, for example the privacy-invading personal monitoring described in my recent article about China’s AI juggernaut: http://www.nxtbook.com/nxtbooks/manifest/i3_20180506/index.php#/20.
“Since AI is truly a global science/technology/moral issue, there are worldwide implications. But as seen at the Zuckerberg Senate hearings, these decisions are way above the capabilities of policy-makers. I expect that AI developments will take shape and become part of our way of living, starting with industrial uses. And the arguments will continue, possibly until a superintelligent, sentient ‘being’ can dominate a flesh-and-blood opponent. I’m not putting a clock on that one.”
Rob Reis, President, Higher Ground, Palo Alto, CA.
“When the hacking of our supposedly secure credit card databases and the deaths from supposedly safe autonomous cars become things of the past, only then will our software industry be ready to take on superintelligence.”
Dick Krain, Retired Senior Executive, Grey Advertising, NYC.
“If a self-driving car approaching an intersection wants to make a right turn and a car is coming from the left, it must decide whether the other car’s speed will permit the turn, whether the other car will slow down or stop to permit the turn, whether the other car will speed up and beat it to the turn, or whether it should simply stop and wait for the other car to pass.
“Now, if the other car is also self-driving, you could build in some form of communications device to coordinate the two machines. But what if the other car is driven by a human? What happens then? Will the self-driving car be able to correctly forecast what the human driver will do? Will the human driver be able to tell that the car waiting to make the turn is a self-driving car? A bad decision can lead to injury or death for the self-driving car’s passenger or the human driver in the other car.
“As more and more self-driving cars replace human-driven cars, will self-driving cars essentially take over the responsibility of driving? Will traffic rules have to change? What about pedestrians? How do they decide if it is safe to cross the street? If they are hit by a self-driving car, who is at fault? Add bicycles and skateboards into the equation and you see the clear dangers of artificial intelligence.”
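Krain’s right-turn dilemma can be made concrete with a toy sketch. The snippet below is purely illustrative, not a real autonomous-driving system or API; the function names, time thresholds, and safety margin are all made-up values chosen for the example. It captures only the simplest part of his scenario, the time-gap judgment, and deliberately ignores the harder question he raises: predicting whether the other driver, human or machine, will speed up, slow down, or stop.

```python
# Toy sketch of the right-turn decision: turn only if the oncoming
# car leaves a comfortable time gap; otherwise stop and wait.
# All thresholds here are illustrative assumptions, not real values.

def time_gap_seconds(distance_m: float, speed_mps: float) -> float:
    """Seconds until the oncoming car reaches the intersection."""
    if speed_mps <= 0:  # stopped or reversing: no approaching threat
        return float("inf")
    return distance_m / speed_mps

def decide_right_turn(oncoming_distance_m: float,
                      oncoming_speed_mps: float,
                      turn_time_s: float = 4.0,
                      safety_margin_s: float = 2.0) -> str:
    """Return 'turn' or 'wait' using a simple time-gap rule.

    A real system would also have to model the other driver's
    intentions, exactly the uncertainty Krain points to.
    """
    gap = time_gap_seconds(oncoming_distance_m, oncoming_speed_mps)
    if gap > turn_time_s + safety_margin_s:
        return "turn"
    return "wait"

# A car 100 m away at 10 m/s leaves a 10-second gap: turn.
print(decide_right_turn(100.0, 10.0))  # turn
# The same car at 25 m/s leaves only a 4-second gap: wait.
print(decide_right_turn(100.0, 25.0))  # wait
```

Even this trivial rule shows where the difficulty lies: the inputs (distance, speed) are measurable, but the rule says nothing about what the human driver will do next, which is the part of the problem Krain argues the machines cannot yet forecast.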
Maurice de Hond, Founder, The Steve Jobs School, Amsterdam, the Netherlands.
“My opinion is that, as always, developments in technology bring many blessings and also some dangers. It takes time before we become aware of those dangers and get them under control.
“This will also be the case with A.I. I think the real danger will come in 20 to 30 years. The danger will not be the technology itself, but that it could be used in the wrong way by people who have no understanding of the consequences.”