I was lucky enough to meet with Ben Goertzel in his lab at Hong Kong Science Park to take a tour of Hanson Robotics and meet with the team working on his Artificial General Intelligence project, OpenCog.
The lab itself was not that impressive: a fairly cramped space filled with synthetic disembodied faces attached to robot frames. But it was a hive of activity, with people all around fiddling with wires, adjusting faces, and furiously typing away at their keyboards. And even though the space they are working out of may not be overwhelmingly awe-inspiring, what they are attempting to do here is. To put it simply, they are trying to create humanoid robots that are as intelligent as we are.
And that is just the beginning. They are quick to point out that once human-level intelligence is reached, these machines will quickly move past that target, since they will then be able to rewrite their own programming and continually improve on what their human programmers created.
Since 2008 Ben and his team have been working away trying to solve the problem of intelligence. They spend most of their time writing the code and algorithms that control a variety of cognitive abilities such as language acquisition, memory functions, visual and spatial awareness, etc., and then weaving them all together. Significant time is also devoted to interacting with the robots as the programming allows the machines to learn from their experiences communicating with humans.
Some might see the connection to Westworld. I went to dinner with Ben and some of the staff and the topic of Westworld came up a couple of times, mostly because I kept bringing it up. It seems we don’t have to worry about a park where humans get to live out their wildest fantasies on scripted robots, because they seem to believe that these machines will achieve sentience long before any park like that is built. By 2025, according to Ben, to be exact.
If he is right, and I’m not saying he is (I don’t know enough about this to even make a guess), that means we have roughly 8 years left as the most intelligent thing on this planet. I need to stress that these machines won’t just be more intelligent in some ways, the way a calculator is better at computing numbers than any human ever could be; they will be more intelligent in every conceivable way. More logical, more rational, even more emotionally intelligent, as they will be able to read facial expressions, see brainwave activity and blood flow, and have a better understanding of the biological processes that drive our feelings and emotions. All of that raises a lot of very interesting questions, because suddenly, and it may be very sudden, we would be living in a world in which humans are no longer the most intelligent species on the planet.
I doubt most people would be so willing to accept living in a world where ‘robots’ are in control. But if we were to push back, I can’t see it going well for us. After all, the reason we are in control today is not because we are the strongest or the fastest; it is because we are the most intelligent. If that is no longer true, what advantage could we possibly have over these things? Sure, we have numbers and all sorts of weapons, but most of our weapons and defense systems are, conveniently for them, being brought online, and we aren’t exactly an organized collective, especially when shit hits the fan.
It seems to me that the only solution would be for us to accept our place as number 2 and cede power to whatever these things end up calling themselves. Through a variety of brain-machine interfaces we may even merge with them. That might not be such a bad thing; humans haven’t exactly done the best job of running the world thus far, and our track record of just and benign rule is pretty abysmal. Let’s just hope they are more enlightened rulers than we were.
Also if we were on the verge of creating genuine intelligence, would foreign powers not try to stop it? The emergence of artificially intelligent sentience probably should be an issue of national security.
There is a lot of doubt and skepticism any time the subject of AI is brought up, with most people thinking it is either never going to happen or too far off in the future to bother worrying about yet. The problem with that is that only a handful of people in the world are qualified to even guess when this is going to happen. You need a thorough understanding of neuroscience and computer science, and you need to be up to date with everything happening in AI research, to even try to answer that question. The only doubt I still have comes from the fact that no one yet has a complete understanding of how our brains work or how consciousness came to be.
But for now the sensible thing to do is defer to the people who understand this stuff best, and the consensus among them seems to indicate that we will reach Artificial General Intelligence sometime in the next 10 to 40 years. At the very least it now seems to be a question of when, not if. No one can accurately predict what will come next; what is important is that we start figuring out what this means for society and how we want to live in that kind of world.
And hell, even if none of this happens, it’s at least interesting to consider. Right, Einstein?
About the author
Benjamin Stecher is an independent Canadian researcher whose interests primarily include neuroscience, artificial general intelligence, education reform and China.