I’ve just finished watching a video conversation between Steven Bartlett and Mo Gawdat that I found compelling for a variety of reasons. Bartlett, if you are unfamiliar with his name, is a British-Nigerian entrepreneur and podcaster of some note in London. Mo Gawdat, Steven’s guest, is an Egyptian entrepreneur and writer, the former chief business officer for Google X and author of two bestsellers, Solve for Happy and Scary Smart.
“EMERGENCY EPISODE: Ex-Google Officer Finally Speaks Out On The Dangers Of Artificial Intelligence!”
Yep, that’s the conversation, and it runs over two hours, which is a major turnoff for me, but the premise is perhaps the biggest story in human history, and so I gritted my teeth and allowed my computer to take over a couple of hours of my day.
Let me explain how I approach both reading and listening. I’m the kind of guy who makes a bargain with a book: I’ll give it sixty pages, and if it hasn’t caught my interest by then, out it goes—or at least back onto my bookshelf. I make the same deal with podcasts or videos, but cut the time allowance to fifteen minutes.
I was still there at the end of the Bartlett-Gawdat video, fascinated.
Mo is a big hitter, with an illustrious technology career
Moving between his own business ventures and very senior roles at IBM and Microsoft, he was most recently Chief Business Officer of Google X, the firm’s secretive research and development facility.
He left Google after being struck by what was, for him, a terrifying revelation about the future of AI, having witnessed an eerie moment in the firm’s R&D labs. AI developers working with Google X on dexterous robotic arms had made what he described as slow progress, until one day he watched an arm reach down and pick up a ball, a deceptively difficult task the team had worked for weeks to achieve.
Even spookier, every single arm in the group could immediately replicate the move, and after another two days the arms could pick up just about anything. Essentially, they had taught each other—computer to computer.
He realized that machines, even at a very basic level of intelligence, have the potential to learn incredibly quickly
"The reality was,” he said, “we were creating God. AI has the potential to reach so-called technological singularity - the point at which it might become uncontrollable and irreversible. In the case of intelligent machines, this means AI could eclipse humanity and escape our control.”
So that sets the stage for the conversation to come, which is far more questioning, and far more open to other, more optimistic possibilities, than my intro might suggest.
ChatGPT, one iteration of AI still in its early stages, is now 1,000 times more capable than the human brain
So says Gawdat, who suggests that within a year or two, because of computer-to-computer learning, it will be 50,000 times as capable at a minimum. Try to wrap your mind around that.
If (and it’s a big if) the companies working in AI can keep it out of the hands of hackers, grifters and others who would use it for nefarious purposes, breakthroughs in renewable energy, desalination, rewilding destroyed farmland and restoring depleted water tables might follow, and nearly all of the negative forces driving climate change might well be mitigated. That’s earthshaking, in that it might yet be possible to reverse the damage humanity has done to our once-pristine planet, a hope I had long ago given up.
So here’s a question no one has yet asked
Is it possible, probable or even conceivable that we warring, angry, impossibly divided humans are being given a boost at our time of existential need by powers that lie outside our understanding?
Whoa! I’d like to say I had a probable (or even improbable) answer to that, but find myself stumped.
There are things in life about which I have been in the habit of saying ‘I don’t disbelieve.’
The jury is still out for me on a number of issues. Not yet ready to believe in reincarnation or aliens visiting earth, I find refuge in not disbelieving. “Bullshit” is too strong a word for my limited understanding of a humanity that knows perhaps two or three percent of what there is to be known.
So, the question lingers. And I don’t disbelieve that we were somehow, just in the nick of time, set upright on a road headed toward total environmental collapse and the end of the human species.
AI growth will be exponential, assuming the database keeps doubling. You can see that already in technologies such as MidJourney. But crawling all that data and generating a response quickly takes time, and you can see that lag today in ChatGPT and MidJourney. Now factor in quantum computing, where the number of calculations per second doesn’t just double or triple but is raised to any number of superpositional powers. That’s when the bots will truly take over. And if self-teaching happens in the way described here, we’re in a whole load of trouble, and it will come quicker than anyone imagines.
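To put rough numbers behind that last point, here is a toy sketch in Python. This is my own arithmetic, not anything from the video, and the qubit counts are illustrative assumptions: a classical machine that doubles its capability each hardware generation needs n generations to reach a factor of 2^n, while an n-qubit register spans 2^n basis states in a single superposition.

```python
# Toy arithmetic only (my illustration, not from the conversation).
# Classical capability that doubles once per hardware generation needs
# n generations to reach 2**n; an n-qubit quantum register spans 2**n
# basis states at once.

def doublings_needed(target: int) -> int:
    """Generations of classical doubling needed to reach `target`."""
    generations = 0
    while 2 ** generations < target:
        generations += 1
    return generations

for qubits in (10, 50, 300):
    states = 2 ** qubits  # basis states held in superposition
    print(f"{qubits} qubits span {states:.2e} basis states; "
          f"matching that by doubling takes {doublings_needed(states)} generations")
```

The exponent is the same on both sides; the difference the comment is gesturing at is that the classical machine pays for each doubling with a whole hardware generation, while the quantum register holds the entire 2^n state space at once.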