There’s only one reason man eats fish. We eat fish because we can. Fishes are dumb and we are smart. We are at the top of the food chain and we take whatever we want.
I therefore don’t take kindly to anyone who tries to rearrange this balance. Boston Dynamics may think it’s a cool company. But I’ll tell you what a cool company does: it doesn’t make robots that can open doors!
Chances are that if you’ve been anywhere near the internet in the last couple of days, you would have seen a robotic dog eerily opening a door. And then propping the door open for its mate to pass through. A doggone sight.
What despicable wise-assery is that? What will it do next, compete in Master Chef Australia?
Boston Dynamics, you are not cool!
Look, I’m not a Luddite. I like technology. I like Alexa and NASA. Heck, I even like the Bugatti Chiron. I like anything that moves the human race forward.
But I fail to see the connection between moving forward and super-smart canines. One day, what if they decide not to heel, not to fetch? They might well turn around and say, “You fetch, dipstick!”
That’s the sticking point with artificial intelligence and machine learning. We could create something we no longer have a handle on.
I’m with Elon Musk on the dangers of AI. You might be with Mark Zuckerberg and call us “naysayers” and unnecessarily negative. But I’d rather believe a man who’s put a car in space than a guy who spends most of his time on Facebook.
Thankfully, a doomsday robot is still some way off yet. Our current AI efforts are still at the Artificial Narrow Intelligence, or ‘Weak AI’, stage. Artificial Narrow Intelligence systems can only do one task: say, beat you at chess or Scrabble. They can’t play both Scrabble and chess. A Jack of one trade.
The AI that Elon Musk, Stephen Hawking and Bill Gates warn about is obviously not Artificial Narrow Intelligence. That AI has to have human-level intelligence.
Human-level ‘intelligence’ or ‘general’ intelligence is more than proficiency in one task. According to a group of researchers, it is:
“A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—‘catching on,’ ‘making sense’ of things, or ‘figuring out’ what to do.”
One major challenge with our current AI efforts is trying to teach AI systems to remember stuff or learn from experience. They can’t. They just use brute computational power each time to get tasks done. In essence, they are mindless systems.
A doomsday robot will have to move from Artificial Narrow Intelligence to human-level intelligence, or Artificial General Intelligence before it can scheme to gas us.
Once it achieves Artificial General Intelligence, the common theory goes, such a system, aware of its own abilities, will rapidly improve itself and evolve into superintelligence, or Artificial Super Intelligence. ASI could be anywhere from fifty to a billion times smarter than we are.
Fancy that. A robot a billion times smarter than we are. Nothing to worry about there.
All this, of course, sounds like science fiction, a Twentieth Century Fox idea. Probably never going to happen.
Yeah, I’m sure a mission to the moon must have also sounded deranged to High Chief Nid, the hunter-gatherer.
It feels like it won’t be long before Boston Dynamics or some other geekstitute progresses from Artificial Narrow Intelligence to Artificial General Intelligence. Just check out their YouTube feed. These dudes love robots more than they love humans!
But wouldn’t it be great to have your own personal robot sidekick! It’d gush with you at a Paul Pogba rabona or join you in wishing Kevin De Bruyne breaks a leg.
Maybe it can even teach you how to make money without working.
That, fellas, would be the real beauty of technology.
Below is an earlier post I wrote about the existential threat of AI.