A hammer makes me stronger. Why not use it? AI makes me smarter. Why not use it?
Robots and Artificial Intelligence (AI) are super-hot topics right now. Man against machine. An epic war unfolds! Is mankind still the pride of creation? Will we be surpassed by strong, intelligent robots that never sleep? Or, will cyborg technology save the world and turn us into demigods?
Given AI’s high-profile presence in the media, it’s hardly surprising that many big names pop up to air their words of wisdom. Time to round up the most out-there opinions. And since opinions matter, I’ll add my own to each of the big-name speculations.
But first, let’s start with the commonalities: Mainstream opinion has it that the impact of AI on society at large will be huge. If you don’t agree, just take a look back at what was once the future:
In the nineties, self-parking cars were unthinkable. Yet, they have been commercially available since Toyota introduced automatic parking assist on the Prius in 2003. And today we’re on the brink of seeing cars that are entirely self-driving.
Paro, the robot baby seal that assists in the therapy of dementia patients in Japan, was first marketed in 2004. At the time, most westerners shook their heads in disbelief. Yet, given demographic trends, it is now widely accepted that there is no way of avoiding robot-assisted care for elderly people.
If you’re old enough, you surely remember K.I.T.T. from the eighties TV series Knight Rider. If you do, you’ll probably agree that at the time, a talking device was pure science fiction. Yet, in 2011, Apple launched Siri, the cheeky personal assistant, on the iPhone 4S. And today, Alexa, the voice assistant on Amazon’s Echo, is ordering pizza for more than 25 million households.
AI Good or Bad?
So, AI is pretty exciting!
But is it also good?
Right, that’s exactly the question that we asked our random panel of Silicon Valley (and other) celebs. And here’s what they said.
Elon Musk on AI: The Merchant of Gloom
The global arms race for AI will cause World War III
The case against AI is led by Elon Musk. He thinks the risk of AI running amok is too high. In the end, he believes, the machines will win. Check out the “Metalhead” episode of the Netflix series Black Mirror, and you’ll understand exactly what he means. Or watch Robocop. Or Terminator. Or The Matrix.
At first, Musk’s statement seems heartfelt. What’s strange, though: Tesla is taking the idea of the self-driving car to the next level. That’s pure AI. So, is there good AI and bad AI? And who decides which is which? While Musk does have a point in emphasizing the importance of AI safety, he is probably also being very opportunistic.
Mark Cuban on AI: The Entrepreneur
The world’s first trillionaires are going to come from somebody who masters AI.
Mark Cuban, the Shark Tank investor with a net worth of $3.7 billion, has never been shy about thinking big. And he defines himself largely via entrepreneurial success, which, in his view, means financial success. He is very optimistic about AI and – to my knowledge – has not made any statements about AI safety. However, he does assert that AI will be a big disruption to the job market. And he recommends preparing for it by favoring college degrees in fields like neural networks and deep learning.
While Cuban’s prediction might indeed become reality, we’d expect a true leader to think beyond it. A concentration of power and money in the hands of whoever wins the AI race clearly has major sociological, geopolitical, and economic consequences. A leader would analyze them, assess them, and make recommendations on how to mitigate the risks. Cuban does none of this.
Garry Kasparov on AI: The Idealist
We will need the help of machines to bring our grandest dreams into reality.
In a fascinating and funny TED talk, Kasparov makes a case for AI. His argument runs along two lines: Firstly, AI is not true intelligence; it’s brute force. Secondly, AI is only as strong as its creators. So, he argues, the truly intelligent beings are the inventors of Deep Blue, not Deep Blue itself. Looking ahead, he believes the game changer will come when machines complement humans. Machines bring memory, instructions, processing speed, data crunching, and objectivity; humans bring intuition, experience, creativity, judgment, passion – and most of all, understanding. The combination of both can be used to solve many problems.
While this view is popular, I personally believe it will not hold true forever. Yes, the pie (i.e. the economy) will get bigger thanks to AI. But who guarantees that the new jobs AI creates will be filled by humans? Even if AI uses different methods than the human brain, it will replace more and more humans in everyday tasks.
Stephen Hawking on AI: The Agnostic
Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst.
The late, great cosmologist Stephen Hawking recognized AI’s enormous potential to help humanity, but he also raised concern about the enormous risk of it slipping out of control and doing unimaginable harm. Moreover, he warned that legislation aimed at managing risk would be ineffective because it would never catch up with AI’s ever-increasing complexity.
I agree that there is a large element of uncertainty in the risks of AI (unknown unknowns). And there will have to be more debate on how we as a society want to deal with the new challenges. But merely saying “be careful” is of limited help, as every parent knows only too well.
Bill Gates on AI: The Economist
Certainly, we can look forward to longer vacations.
Bill Gates also agrees that AI will have a positive impact on mankind. Economically, AI will allow us to produce more goods and services with less human effort. That’s good news! The question is: who will benefit? Gates sees problems especially in the mid-term, because AI will evolve faster than societies can adjust. One possible solution, he suggests, is to tax robots in more or less the same way that we tax individuals.
My verdict on this one is simple: I agree.
Mark Zuckerberg on AI: The Blindsided
Yeah, technology can generally always be used for good and bad.
Ha! Ha! Ha! To be fair, the quote dates from before the Cambridge Analytica scandal came to light; it was a reply to Elon Musk’s AI bashing in 2017. Right now, Zuckerberg isn’t saying much about AI, let alone AI safety – presumably because his lawyers are still working out the most innocuous wording. Consequently, he appears on our panel as an unofficial contender only, and no expert judgment is made.
I’ve talked to Mark about this. His understanding of the subject is limited.
— Elon Musk (@elonmusk) July 25, 2017
So now: AI good or bad?
Who am I to say?
But I do believe in Amara’s Law, which states that the short-term impact of game-changing inventions is usually overestimated (i.e. hyped), while the long-term impact is underestimated. Take steam trains: it took decades for them to replace horse-drawn carriages. But once they did, railways changed not only the cost and comfort of transportation but also the parameters of society, to the point where they helped trigger industrialization.
Applied to AI, Amara’s Law means we might have more time than we think. True, in our ever more complex world, the challenges for regulators will be massive. But still, I do have hope that laws will manage to keep pace with progress so that society at large will be able to redefine the rules.
On the other hand, it is difficult to imagine that changes of such magnitude will occur without friction, and these challenges need to be addressed wisely. For instance, I don’t want Mark Cuban to be right: extreme concentration of wealth is bad both economically and for societies at large. And, unclear as Musk’s motivation is, he does have a very valid point about the geopolitical risks of AI.
Let’s take AI Safety seriously. Let’s work with historians, philosophers, religious leaders, sociologists, and economists. Let’s make sure our children will be able to reap the benefits from Artificial Intelligence, to live their lives in prosperous peace.
Don’t agree? Want to clarify? Leave a comment below! Looking forward to continuing the debate!